Agile integration
architecture
Using lightweight integration runtimes to implement
a container-based and microservices-aligned
integration architecture
Contents

Authors
How to navigate the book

Section 1: The Impact of Digital Transformation on Integration
  Chapter 1: Integration has changed
    The impact of digital transformation
    The value of application integration for digital transformation
  Chapter 2: The journey so far: SOA, ESBs and APIs
    The forming of the ESB pattern
    What went wrong for the centralized ESB pattern?
    The API economy and bi-modal IT
    Microservices architecture: A more agile and scalable way to build applications
    The rise of lightweight runtimes
    A comparison of SOA and microservice architecture
  Chapter 3: The case for agile integration architecture
    Microservice architecture
    Agile integration architecture
    Aspect 1: Fine-grained integration deployment
    Aspect 2: Decentralized integration ownership
    Aspect 3: Cloud-native integration infrastructure
    How has the modern integration runtime changed to accommodate agile integration architecture?

Section 2: Exploring agile integration architecture in detail
  Chapter 4: Aspect 1: Fine-grained integration deployment
    What characteristics does the integration runtime need?
    Granularity
    Conclusion on fine-grained integration deployment
    Lessons Learned
  Chapter 5: Aspect 2: Decentralized integration ownership
    Decentralizing integration ownership
    Breaking up the centralized ESB
    Does decentralized integration also mean decentralized infrastructure?
    Traditional centralized technology-based organization
    Moving to a decentralized, business-focused team structure
    Big bangs generally lead to big disasters
    Prioritizing Project Delivery First
    Evolving the role of the Architect
    How can we have multi-skilled developers?
    Enforcing governance in a decentralized structure
    Benefits for cloud
    Conclusions on decentralized integration ownership
    Lessons Learned
  Chapter 6: Aspect 3: Cloud native integration infrastructure
    Cattle not pets
    Integration pets: The traditional approach
    Integration cattle: An alternative lightweight approach
    What's so different with cattle
    Pros and cons
    Application and integration handled by the same team
    Common infrastructure enabling multi-skilled development
    Portability: Public, private, multicloud
    Conclusion on cloud native integration infrastructure
    Lessons Learned

Section 3: Moving Forward with an Agile Integration Architecture
  Chapter 7: What path should you take?
    Don't worry…we haven't returned to point-to-point
    Deployment options for fine-grained integration
  Chapter 8: Agile integration architecture for the Integration Platform
    What is an integration platform?
    The IBM Cloud Integration Platform
    Emerging use cases and the integration platform
    Agile integration architecture and IBM
  Appendix One: References
	Kim Clark
Integration Architect
kim.clark@uk.ibm.com
Kim is a technical strategist on IBM's integration
portfolio, working as an architect providing guidance
to the offering management team on current trends
and challenges. He has spent the last couple of
decades working in the field, implementing
integration and process-related solutions.
Tony Curcio
Director Application Integration
tcurcio@us.ibm.com
After years of implementing integration solutions
in a variety of technologies, Tony joined the IBM
offering management team in 2008. He now leads
the Application Integration team in working with
customers as they adopt more agile models for
building integration solutions and embrace cloud
as part of their IT landscape.
Nick Glowacki
Technical Specialist
nick.glowacki@ibm.com
Nick is a technical evangelist for IBM's
integration portfolio, working as a
technical specialist exploring current
trends and building leading-edge solutions.
He has spent the last 5 years working in
the field, guiding a series of teams
through their microservices journey.
Before that he spent 5+ years in various
other roles, including developer, architect,
and IBM DataPower specialist. Over the
course of his career he has worked with
Node.js, XSL, JSON, Docker, Solr, IBM API
Connect, Kubernetes, Java, SOAP, XML,
WAS, FileNet, MQ, C++, Cast Iron,
IBM App Connect, and IBM Integration Bus.
Sincere thanks go to the following people for their
significant and detailed input and review of the material:
Carsten Bornert, Andy Garratt, Alan Glickenhouse,
Rob Nicholson, Brian Petrini, Claudio Tagliabue,
and Ben Thompson.
THANK YOU
Executive Summary
Organizations pursuing digital transformation must embrace
new ways to use and deploy integration technologies, so they can
move quickly, in a manner appropriate to the goals of multicloud,
decentralization and microservices. The application integration layer
must transform to allow organizations to move boldly in building new
customer experiences, rather than forcing models for architecture
and development that pull away from maximizing the organization's
productivity.
Many organizations have started embracing agile application techniques
such as microservice architecture and are now starting to see the
benefits of that shift. This approach complements and accelerates
an enterprise’s API strategy. Businesses should also seek to use this
approach to modernize their existing ESB infrastructure to achieve
more effective ways to manage and operate their integration services
in their private or public cloud.
This book explores the merits of what we refer to as agile integration
architecture¹ - a container-based, decentralized and microservice-aligned
approach for integration solutions that meets the demands
of agility, scalability and resilience required by digital transformation.
¹ Note that we have used the term “lightweight integration” in the past, but have moved to the more appropriate “agile integration architecture”.
Agile integration architecture enables organizations to build, manage and operate their integrations effectively and efficiently, achieving the goals of digital
transformation. It includes three distinct aspects that we will explore in detail:
a) Fine-grained integration deployment, b) Decentralized integration ownership, and c) Cloud-native integration infrastructure.
How to navigate the book

The book is divided into three sections.

Section 1: The Impact of Digital Transformation on Integration

Chapter 1: Integration has changed
Explores the effect that digital transformation has had on both the application and integration landscape, and the limitations of previous techniques.

Chapter 2: The journey so far: SOA, ESBs and APIs
Explores what led us up to this point: the pros and cons of SOA and the ESB pattern, the influence of APIs and the introduction of microservices architecture.

Chapter 3: The case for agile integration architecture
Explains how agile integration architecture exploits the principles of microservices architecture to address these new needs.

Section 2: Exploring agile integration architecture in detail

Chapter 4: Aspect 1: Fine-grained integration deployment
Addresses the benefits an organization gains by breaking up the centralized ESB.

Chapter 5: Aspect 2: Decentralized integration ownership
Discusses how shifting from a centralized governance and development practice creates new levels of agility and innovation.

Chapter 6: Aspect 3: Cloud native integration infrastructure
Provides a description of how adopting key technologies and practices from the cloud native application discipline can provide similar benefits to application integration.

Section 3: Moving Forward with an Agile Integration Architecture

Chapter 7: What path should you take?
Explores several ways agile integration architecture can be approached.

Chapter 8: Agile integration architecture for the Integration Platform
Surveys the wider landscape of integration capabilities and relates agile integration architecture to other styles of integration as part of a holistic strategy.
Section 1:
The Impact of Digital Transformation on Integration

The rise of the digital economy, like most of the seismic technology shifts over the past several
centuries, has fundamentally changed not only technology but business as well. The very concept
of the “digital economy” continues to evolve. Where once it was just the section of the economy built
on digital technologies, it has evolved to become almost indistinguishable from the “traditional
economy”, growing to include almost any new technology, such as mobile, the Internet of Things,
cloud computing, and augmented intelligence.

At the heart of the digital economy is the basic need to connect disparate data no matter where
it lives. This has led to the rise of application integration: the need to connect multiple applications
and data to deliver the greatest insight to the people and systems who can act on it. In this section
we will explore how the digital economy created and then altered our concept of application
integration.

- Chapter 1: Integration has changed
Explores the effect that digital transformation has had on both the application and integration
landscape, and the limitations of previous techniques.

- Chapter 2: The journey so far: SOA, ESBs and APIs
Explores what led us up to this point: the pros and cons of SOA and the ESB pattern, the influence
of APIs and the introduction of microservices architecture.

- Chapter 3: The case for agile integration architecture
Explains how agile integration architecture exploits the principles of microservices architecture
to address these new needs.
Chapter 1: Integration has changed

The impact of digital transformation

Over the last two years we've seen a tremendous acceleration in the
pace at which customers are establishing digital transformation initiatives.
In fact, IDC estimates that digital transformation initiatives represent
a $20 trillion market opportunity over the next 5 years². That is a
staggering figure with respect to the impact across all industries and
companies of all sizes. A primary focus of this digital transformation
is to build new customer experiences through connected experiences
across a network of applications that leverage data of all types.

However, bringing together these processes and information sources
at the right time and within the right context has become increasingly
complicated. Consider that many organizations have aggressively
adopted SaaS business applications, which have spread their key data
sources across a much broader landscape. Additionally, new data
sources that are available from external data providers must be
injected into business processes to create competitive differentiation.
Finally, AI capabilities - which are being attached to many
customer-facing applications - require a broad range of information
to train, improve and correctly respond to business events. These
processes and information sources need to be integrated - by making
them accessible synchronously via APIs, propagating them in near real
time by event streams, and a multitude of other mechanisms - more so
than ever before.

To drive new customer experiences, organizations must tap into an
ever-growing set of applications, processes and information sources
- all of which significantly expand the enterprise's need for, and
investment in, integration capabilities.

It is no wonder that this growing complexity has increased the
enterprise's need for and investment in integration capabilities.
The pace of these investments, in both digital transformation
generally and integration specifically, has led to a series of
changes in how organizations are building solutions. Progressive IT
shops have sought out, and indeed found, more agile ways to develop
than were typical even just a few years ago.

² IDC MaturityScape Benchmark: Digital Transformation Worldwide, 2017, Shawn Fitzgerald.
The value of application integration for digital
transformation

When we consider the agenda for building new customer experiences and focus on how data is
accessed and made available for the services and APIs that power these initiatives, we can clearly
recognize several significant benefits that application integration brings to the table.

1. Effectively address disparity:
One of the key strengths of integration tooling
is the ability to access data from any system
with any sort of data in any sort of format and
build homogeneity. The application landscape
is only growing more diverse as organizations
adopt SaaS applications and build new solutions
in the cloud, spreading their data further across
a hybrid set of systems. Even in the world of
APIs, there are variations in data formats and
structures that must be addressed.
Furthermore, every system has subtleties in the
way it enables updates and surfaces events.
The need for the organization to address
information disparity is therefore growing at
that same pace, and application integration
must remain equipped to address the challenge
of emerging formats.

2. Expertise of the endpoints:
Each system has its own peculiarities that must
be understood and responded to. Modern
integration includes smarts around complex
protocols and data formats, but it goes much
further than that. It also incorporates
intelligence about the actual objects, business
and functions within the end systems.
Application integration tooling is compassionate
- understanding how to work with each system
distinctly. This knowledge of the endpoint must
include not only errors, but authentication
protocols, load management, performance
optimization, transactionality, idempotence,
and much, much more. By including such
features “in the box”, application integration
yields tremendous gains in productivity over
coding, and arguably a more consistent level
of enterprise-class resiliency.
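The first of these benefits - effectively addressing disparity - is easiest to see with a small sketch. The following Python fragment is our illustration, not taken from any IBM product: the system names, field names and payloads are all hypothetical. It normalizes customer records from an XML-speaking legacy system and a JSON-speaking SaaS application into one canonical shape, which is the essence of the "homogeneity" that integration tooling provides out of the box.

```python
# Illustrative sketch (hypothetical systems and fields): normalizing
# records from two differently shaped endpoints into one canonical form.
import json
import xml.etree.ElementTree as ET

def from_legacy_xml(payload: str) -> dict:
    """Parse a record from a hypothetical XML-speaking system of record."""
    root = ET.fromstring(payload)
    return {
        "id": root.findtext("CustID"),
        "name": root.findtext("FullName"),
    }

def from_saas_json(payload: str) -> dict:
    """Parse a record from a hypothetical JSON-speaking SaaS application."""
    data = json.loads(payload)
    return {
        "id": str(data["customerId"]),
        "name": data["displayName"],
    }

xml_record = "<Customer><CustID>42</CustID><FullName>Ada Lovelace</FullName></Customer>"
json_record = '{"customerId": 42, "displayName": "Ada Lovelace"}'

# Both sources now yield the same canonical record.
assert from_legacy_xml(xml_record) == from_saas_json(json_record)
```

In real integration tooling this mapping is configured rather than hand-coded, and the runtime also handles the endpoint concerns (authentication, retries, idempotence) described above.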
3. Innovation through data:
Applications in a digital world owe much of their
innovation to their opportunity to combine data
that is beyond their boundaries and create new
meaning from it. This is particularly visible in
microservices architecture, where the ability of
application integration technologies to
intelligently draw multiple sources of data
together is often a core business requirement.
Whether composing multiple API calls together
or interpreting event streams, the main task of
many microservices components is essentially
integration.

4. Enterprise-grade artifacts:
Integration flows developed through application
integration tooling inherit a tremendous amount
of value from the runtime. Users can focus on
building the business logic without having to
worry about the surrounding infrastructure.
The application integration runtime includes
enterprise-grade features for error recovery,
fault tolerance, log capture, performance
analysis, message tracing, and transactional
update and recovery. Additionally, in some tools
the artifacts are built using open standards and
consistent best practices, without requiring
the IT team to be experts in those domains.

Application integration benefits organizations building digital transformation solutions by
effectively addressing information disparity, providing expert knowledge of application
endpoints, easily orchestrating activities across applications, and lowering the cost of
building expert-level artifacts.

Each of these factors (data disparity,
expert endpoints, innovation through
data, and enterprise-grade artifacts)
is causing a massive shift in how an
integration architecture needs to be
conceived, implemented and managed.
The result is that organizations, and
architects in particular, are reconsidering
what integration means in the new digital
age. Enter agile integration architecture:
a container-based, decentralized and
microservices-aligned approach for
integration solutions that meets the
demands of agility, scalability and
resilience required by digital
transformation.

The integration landscape is changing
apace with enterprise and marketplace
computing demands, but how did we get
from SOA and ESBs to modern,
containerized, agile integration
architecture?
Chapter 2: The journey so far: SOA, ESBs and APIs
Before we dive into agile integration
architecture, we first need to understand what
came before in a little more detail. In this
chapter we will briefly look at the challenges
of SOA by taking a closer look at what the ESB
pattern was, how it evolved, where APIs came
onto the scene, and the relationship between all
that and microservices architecture.
Let’s start with SOA and the ESB and what
went wrong.
As we started the millennium, we saw the
beginnings of the first truly cross-platform
protocol for interfaces. The internet, and with it
HTTP, had become ubiquitous, XML was limping
its way into existence off the back of HTML, and
the SOAP protocols for providing synchronous
web service interfaces were just taking shape.
Relatively wide acceptance of these standards
hinted at a brighter future where any system
could discover and talk to any other system via
a real-time synchronous remote procedure call,
without reams of integration code as had been
required in the past.
From this series of events, service-oriented architecture was born. The core purpose
of SOA was to expose data and functions buried in systems of record over well-formed,
simple-to-use, synchronous interfaces, such as web services. Clearly, SOA was about more
than just providing those services, and often involved some significant re-engineering to align
the back-end systems with the business needs, but the end goal was a suite of well-defined
common re-usable services collating disparate systems. This would enable new applications
to be implemented without the burden of deep integration every time, as once the integration
was done for the first time and exposed as a service, it could be re-used by the next application.
However, this simple integration was a one-sided equation. We might have been able to
standardize these protocols and data formats, but the back-end systems of record were
typically old and had antiquated protocols and data formats for their current interfaces.
Figure 1 below shows where the breakdown typically occurred. Something was needed
to mediate between the old system and the new cross-platform protocols.
The forming of the ESB pattern

Figure 1. Synchronous centralized exposure pattern
[Diagram: engagement applications invoke an internally exposed enterprise API via request/response and asynchronous integration; the integration runtimes providing that API - the scope of the ESB pattern - connect through to the systems of record.]

This synchronous exposure pattern via web
services was what the enterprise service bus
(ESB) term was introduced for. It's all in the
name - a centralized “bus” that could provide
web “services” across the “enterprise”.
We already had the technology (the integration
runtime) to provide connectivity to the
back-end systems, coming from the preceding
hub-and-spoke pattern. These integration
runtimes could simply be taught to offer
integrations synchronously via SOAP/HTTP,
and we'd have our ESB.

What went wrong for the
centralized ESB pattern?

While many large enterprises successfully
implemented the ESB pattern, the term is often
disparaged in the cloud-native space, and
especially in relation to microservices
architecture. It is seen as heavyweight and
lacking in agility. What has happened to make
the ESB pattern appear so outdated?

SOA turned out to be a little more complex than
just the implementation of an ESB, for a host of
reasons - not least of which was the question
of who would fund such an enterprise-wide
program. Implementing the ESB pattern itself
also turned out to be no small task.

The ESB pattern often took the “E” in ESB very
literally and implemented a single infrastructure
for the whole enterprise, or at least one for each
significant part of the enterprise. Tens or even
hundreds of integrations might have been
installed on a production server cluster, and if
that cluster was scaled up, they would be present
on every clone within it. Although this
heavy centralization isn't required by the ESB
pattern itself, it was almost always present in
the resultant topology. There were good
reasons for this, at least initially: hardware and
software costs were shared, provisioning of the
servers only had to be performed once, and, due
to the relative complexity of the software, only
one dedicated team of integration specialists
needed to be skilled up to perform the
development work.

The centralized ESB pattern had the potential to
deliver significant savings in integration costs if
interfaces could be re-used from one project to
the next (the core benefit proposition of SOA).
However, coordinating such a cross-enterprise
initiative and ensuring that it would get
continued funding - and that the funding only
applied to services that would be sufficiently
re-usable to cover their creation costs - proved
to be very difficult indeed. Standards and
tooling were maturing at the same time as the
ESB patterns were being implemented, so the
implementation cost and time for providing a
single service were unrealistically high.

ESB patterns have had issues ensuring
continued funding for cross-enterprise
initiatives, since those initiatives do not apply
specifically within the context of a business
initiative.

Often, line-of-business teams that were
expecting a greater pace of innovation in
their new applications became
increasingly frustrated with SOA, and by
extension the ESB pattern.

Some of the challenges of a centralized
ESB pattern were:

• Deploying changes could potentially
destabilize other unrelated interfaces
running on the centralized ESB.
• Servers containing many integrations
had to be kept running and patched live
wherever possible.
• Topologies for high availability and disaster
recovery were complex and expensive.
• For stability, servers typically ran many
versions behind the current software release,
reducing productivity.
• The integration specialist teams often didn't
know much about the applications they were
trying to integrate with.
• Pooling people with specialist integration
skills resulted in a more waterfall-style
engagement with application teams.
• Service discovery was immature, so
documentation quickly became outdated.

The result was that the creation of services by
this specialist SOA team became a bottleneck
for projects, rather than the enabler it was
intended to be. By association, this typically
gave the centralized ESB pattern a bad name.
Formally, as we’ve described, ESB is an
architectural pattern that refers to the exposure
of services. However, as mentioned above, the
term is often over-simplified and applied to the
integration engine that’s used to implement the
pattern. This erroneously ties the static and
aging centralized ESB pattern with integration
engines that have changed radically over the
intervening time.
Integration engines of today are significantly
more lightweight, easier to install and use, and
can be deployed in more decentralized ways
that would have been unimaginable at the time
the ESB concept was born. As we will see, agile
integration architecture enables us to overcome
the limitations of the ESB pattern.
If you would like a deeper introduction into
where the ESB pattern came from and a
detailed look at the benefits, and the challenges
that came with it, take a look at the source
material for this section in the following article:
http://ibm.biz/FateOfTheESBPaper
The API economy and
bi-modal IT

External APIs have become an essential part of
the online persona of many companies, and are
at least as important as their websites and mobile
applications. Let's take a brief look at how that
evolved from the maturing of internal SOA-based
services.

SOAP-style RPC interfaces proved complex
to understand and use, and simpler, more
consistent RESTful services provided using
JSON/HTTP became a popular mechanism.
But the end goal was the same: to make
functions and data available via
standardized interfaces so that new
applications could be built on top of them
more quickly.

With the broadening usage of these
service interfaces, both within and
beyond the enterprise, more formal
mechanisms for providing services were
required. It quickly became clear that
simply making something available over
a web service interface, or latterly as a
RESTful JSON/HTTP API, was only part
of the story.

That service needed to be easily
discovered by potential consumers,
who needed a path of least resistance
for gaining access to it and learning how
to use it. Additionally, the providers of the
service or API needed to be able to place
controls on its usage, such as traffic
control and an appropriate security
model. Figure 2 below demonstrates how
the introduction of service/API gateways
affects the scope of the ESB pattern.
Figure 2. Introduction of service/API gateways internally and externally
The typical approach was to separate the role of service/API exposure out into a separate gateway.
These capabilities evolved into what is now known as API management and enabled simple
administration of the service/API. The gateways could also be specialized to focus on API
management-specific capabilities, such as traffic management (rate/throughput limiting),
encryption/decryption, redaction, and security patterns. The gateways could also be supplemented
with portals that describe the available APIs which enable self-subscription to use the APIs along
with provisioning analytics for both users and providers of the APIs.
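To make one of those gateway capabilities - traffic management (rate/throughput limiting) - concrete, here is a minimal sketch. This is our illustration, not code from IBM API Connect or any other gateway product: it shows the token-bucket algorithm that gateways commonly use to enforce per-consumer rate limits.

```python
# Illustrative sketch: per-consumer rate limiting as a simple token bucket.
import time

class TokenBucket:
    """Allow roughly `rate` requests per second, with bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # A gateway would answer HTTP 429 Too Many Requests here.

bucket = TokenBucket(rate=2.0, capacity=5.0)
results = [bucket.allow() for _ in range(10)]
# A burst of about 5 calls is allowed; the remainder are throttled.
```

A real gateway keeps one such bucket per API key or consumer, alongside the security, redaction and analytics concerns described above.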
While logically, the provisioning of APIs
outside the enterprise looks like just an
extension of the ESB pattern, there are
both significant infrastructural and design
differences between externally facing
APIs and internal services/APIs.
• From an infrastructural point of view,
it is immediately obvious that the APIs
are being used by consumers and
devices that may exist anywhere from
a geographical and network point of
view. As a result, it is necessary to
design the APIs differently to take into
account the bandwidth available and
the capabilities of the devices used
as consumers.
• From a design perspective, we should
not underestimate the difference in
the business objectives of these APIs.
External APIs are much less focused
on re-use, in the way that internal
APIs/services were in SOA, and more
focused on creating services targeting
specific niches of potential for new
business. Suitably crafted channel
specific APIs provide an enterprise
with the opportunity to radically
broaden the number of innovation
partners that it can work with
(enabling crowd sourcing of new ideas),
and they play a significant role in the disruption
of industries that is so common today. This
realization caused the birth of what we now call
the API Economy, and it is a well-covered topic
on IBM's “API Economy” blog.
The main takeaway here is that this progression
exacerbated an already growing divide between
the older traditional systems of record that still
perform all the most critical transactions
fundamental to the business, and what became
known as the systems of engagement, where
innovation occurred at a rapid pace, exploring
new ways of interacting with external
consumers. This resulted in
bi-modal IT, where new decentralized,
fast-moving areas of IT needed much greater
agility in their development and led to the
invention of new ways of building applications
using, for example, microservices architecture.
The rise of lightweight
runtimes

Earlier, we covered the challenges of the heavily
centralized integration runtime: hard to safely
and quickly make changes without affecting
other integrations, expensive and complex to
scale, etc.

Sound familiar? It should. These were exactly
the same challenges that application
development teams were facing at the same
time: bloated, complex application servers that
contained too much interconnected and
cross-dependent code, on a fragile, cumbersome
topology that was hard to replicate or scale.

Ultimately, it was this common paradigm that
led to the emergence of the principles of
microservices architecture. As lightweight
runtimes and application servers such as
Node.js and IBM WAS Liberty were introduced
(runtimes that started in seconds and had tiny
footprints), it became easier to run them on
smaller virtual machines, and then eventually
within container technologies such as Docker.

Microservices architecture:
A more agile and scalable
way to build applications

In order to meet the constant need for IT to
improve agility and scalability, a next logical
step in application development was to break
up applications into smaller pieces and run
them completely independently of one
another. Eventually, these pieces became
small enough that they deserved a name,
and they were termed microservices.
If you take a closer look at microservices
concepts, you will see that it has a much
broader intent than simply breaking
things up into smaller pieces. There are
implications for architecture, process,
organization, and more—all focused on
enabling organizations to better use
cloud-native technology advances to
increase their pace of innovation.
However, focusing back on the core
technological difference, these small
independent microservices components
can be changed in isolation to create
greater agility, scaled individually to
make better use of cloud-native
infrastructure, and managed more
ruthlessly to provide the resilience
required by 24/7 online applications.
Figure 3 below visualizes the
microservices architecture we’ve just
described.
In theory, these principles could be used anywhere. Where we see them most commonly is in the
systems of engagement layer, where greater agility is essential. However, they could also be used
to improve the agility, scalability, and resilience of a system of record—or indeed anywhere else in
the architecture, as you will see as we discuss agile integration architecture in more depth.
Without question, microservices principles can offer significant benefits under the right
circumstances. However, choosing the right time to use these techniques is critical, and getting
the design of highly distributed components correct is not a trivial endeavor.
Not least is your challenge of deciding the
shape and size of your microservices
components. Add to that equally critical
design choices around the extent to
which you decouple them. You need to
constantly balance practical reality with
aspirations for microservices-related
benefits. In short, your microservices-
based application is only as agile and
scalable as your design is good, and your
methodology is mature.
Figure 3. Microservices architecture: A new way to build applications
A comparison of SOA and microservice architecture

Microservices inevitably get compared to SOA in architectural discussions, not least because they
share many words in common. However, as you will see, this comparison is misleading at best, since
the terms apply to two very different scopes. Figure 4 demonstrates how SOA is enterprise scoped,
while microservices architecture is application scoped.

Service-oriented architecture is an enterprise-wide initiative to create re-usable, synchronously
available services and APIs, such that new applications can be created more quickly incorporating
data from other systems.

Microservices architecture, on the other hand, is an option for how you might choose to write an
individual application in a way that makes that application more agile, scalable, and resilient.

Figure 4. SOA is enterprise scoped, microservices architecture is application scoped

It's critical to recognize this difference in scope, since some of the core principles of each
approach could be completely incompatible if applied at the same scope. For example:

• Re-use: In SOA, re-use of integrations is the primary goal, and at an enterprise level, striving
for some level of re-use is essential. In microservices architecture, creating a microservices
component that is re-used at runtime throughout an application results in dependencies that
reduce agility and resilience. Microservices components generally prefer to re-use code by copy
and accept data duplication to help improve decoupling between one another.

• Synchronous calls: The re-usable services in SOA are available across the enterprise using
predominantly synchronous protocols such as RESTful APIs. However, within a microservices
application, synchronous calls introduce real-time dependencies, resulting in a loss of resilience,
and also latency, which impacts performance. Within a microservices application, interaction
patterns based on asynchronous communication are preferred, such as event sourcing, where a
publish/subscribe model is used to enable a microservices component to remain up to date on
changes happening to the data in another component.

• Data duplication: A clear aim of providing services in an SOA is for all applications to
synchronously get hold of, and make changes to, data directly at its primary source, which
reduces the need to maintain complex data synchronization patterns. In microservices
applications, each microservice ideally has local access to all the data it needs to ensure its
independence from other microservices, and indeed from other applications, even if this means
some duplication of data in other systems. Of course, this duplication adds complexity, so it
needs to be balanced against the gains in agility and performance, but this is accepted as a
reality of microservices design.

So, in summary, SOA has an enterprise scope and looks at how integration occurs between
applications. Microservices architecture has an application scope, dealing with how the internals
of an application are built. This is a relatively swift explanation of a much more complex debate,
which is thoroughly explored in a separate article:
http://ibm.biz/MicroservicesVsSoa

However, we have enough of the key concepts to now delve into the various aspects of agile
integration architecture.
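The event-sourcing interaction described in the comparison above, where one microservice keeps an up-to-date local copy of another's data via publish/subscribe rather than synchronous calls, can be sketched in a few lines. The broker here is an in-memory stand-in for real messaging infrastructure such as Kafka, and all names are illustrative.

```python
class Broker:
    """Minimal in-memory publish/subscribe broker (a stand-in for
    messaging infrastructure such as Kafka)."""

    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, event):
        for callback in self.subscribers.get(topic, []):
            callback(event)

broker = Broker()

# The "orders" microservice keeps its own local copy of customer data,
# kept current by events, rather than calling the customer service
# synchronously on every request (no real-time dependency).
customer_cache = {}
broker.subscribe("customer.updated",
                 lambda e: customer_cache.update({e["id"]: e["name"]}))

# The "customer" microservice publishes an event when its data changes.
broker.publish("customer.updated", {"id": "c1", "name": "Acme Ltd"})
# The orders component's local copy is now up to date.
```

The trade-off discussed in the data duplication bullet is visible here: the cache duplicates data, but the orders component stays responsive even if the customer service is down.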
Chapter 3: The case for agile integration architecture
Let’s briefly explore why
microservices concepts
have become so popular
in the application space.
We can then quickly see
how those principles can
be applied to the
modernization of
integration architecture.
Microservices architecture
Microservices architecture is an alternative approach to structuring applications. Rather
than an application being a large silo of code all running on the same server, an application
is designed as a collection of smaller, completely independently running components.
This enables the following benefits, which are also illustrated in Figure 5 below:
Figure 5. Comparison of siloed and microservices-based applications
• Greater agility: they are small enough to be understood
completely in isolation and changed independently.

• Elastic scalability: their resource usage can be truly
tied to the business model.

• Discrete resilience: with suitable decoupling, changes to one
microservice do not affect others at runtime.
Microservice components are often made from
pure language runtimes such as Node.js or Java,
but equally they can be made from any suitably
lightweight runtime. The key requirements
include that they have a simple dependency-
free installation, file system based deploy, start/
stop in seconds and have strong support for
container-based infrastructure.
Microservices architectures
lead to the primary benefits
of greater agility, elastic
scalability, and discrete
resilience.
As with any new approach there are challenges
too, some obvious, and some more subtle.
Microservices are a radically different approach
to building applications. Let’s have a brief look
at some of the considerations:
• Greater overall complexity: Although the
individual components are potentially simpler,
and as such they are easier to change and
scale, the overall application is inevitably a
collection of highly distributed individual parts.
• Learning curve on cloud-native
infrastructure: To manage the increased
number of components, new technologies and
frameworks are required including service
discovery, workload orchestration, container
management, logging frameworks and more.
Platforms are available to make this easier, but
it is still a learning curve.
• Different design paradigms:
The microservices application architecture
requires fundamentally different approaches
to design. For example, using eventual
consistency rather than transactional
interactions, or the subtleties of asynchronous
communication to truly decouple components.
• DevOps maturity: Microservices require a
mature delivery capability. Continuous
integration, deployment, and fully automated
tests are a must. The developers who
write code must be responsible for it in
production. Build and deployment chains
need significant changes to provide the
right separation of concerns for a
microservices environment.

Microservices architecture enables developers
to make better use of cloud-native infrastructure
and manage components more ruthlessly,
providing the resilience and scalability required
by 24/7 online applications. It also improves
ownership in line with DevOps practices, whereby
a team can truly take responsibility for a whole
microservice component throughout its lifecycle
and hence make changes at a higher velocity.
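One of the design paradigms mentioned above, eventual consistency, is worth a concrete picture. In the sketch below (component names and data are invented for illustration), an update is recorded at its source and propagated to a second component asynchronously: between the write and the event being processed the two views disagree, but they converge once the queue drains.

```python
from collections import deque

# Two hypothetical components: an "orders" service (the source of
# truth) and a "reporting" view that is updated asynchronously.
orders = {}
reporting_view = {}
event_queue = deque()

def place_order(order_id, amount):
    """Record the order locally and emit an event; no distributed
    transaction spans the two components."""
    orders[order_id] = amount
    event_queue.append((order_id, amount))

def drain_events():
    """Deliver pending events, bringing the view up to date."""
    while event_queue:
        order_id, amount = event_queue.popleft()
        reporting_view[order_id] = amount

place_order("o1", 250)
stale = "o1" not in reporting_view   # True: the view briefly lags
drain_events()
consistent = reporting_view == orders  # True once events are processed
```

Accepting that window of staleness is exactly the shift away from transactional interactions that the bullet describes.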
Microservices architecture is not the
solution to every problem. Since there is
an overhead of complexity with the
microservices approach, it is critical to
ensure the benefits outlined above
outweigh the extra complexity. However,
if applied judiciously it can provide order
of magnitude benefits that would be hard
to achieve any other way.
Microservices architecture discussions are
often heavily focused on alternate ways
to build applications, but the core ideas
behind it are relevant to all software
components, including integration.
Agile integration architecture

If what we've learned from microservices
architecture means it sometimes makes sense
to build applications in a more granular,
lightweight fashion, why shouldn't we apply
that to integration too?

Integration is typically deployed in a very siloed
and centralized fashion, such as the ESB pattern.
What would it look like if we were to re-visit that
in the light of microservices architecture?
It is this alternative approach that we call
"agile integration architecture".

Agile integration architecture
is defined as
"a container-based,
decentralized and
microservices-aligned
architecture for integration
solutions".

There are three related, but separate aspects
to agile integration architecture:

• Aspect 1: Fine-grained integration deployment.
What might we gain by breaking out the
integrations in the siloed ESB into separate
runtimes?

• Aspect 2: Decentralized integration ownership.
How should we adjust the organizational
structure to better leverage a more
fine-grained approach?

• Aspect 3: Cloud-native integration infrastructure.
What further benefits could we gain by a
fully cloud-native approach to integration?

Although these each have dedicated chapters,
it's worth taking the time to summarize them
at a conceptual level here.

Aspect 1:
Fine-grained integration
deployment

The centralized deployment of
integration hub or enterprise service
bus (ESB) patterns, where all integrations
are deployed to a single heavily nurtured
(HA) pair of integration servers, has been
shown to introduce a bottleneck for
projects. Any deployment to the shared
servers runs the risk of destabilizing
existing critical interfaces. No individual
project can choose to upgrade the
version of the integration middleware
to gain access to new features.

We could break up the enterprise-wide
ESB component into smaller, more
manageable and dedicated pieces.
Perhaps in some cases we can even get
down to one runtime for each interface
we expose.
These "fine-grained integration deployment" patterns provide specialized, right-sized containers,
offering improved agility, scalability and resilience, and look very different to the centralized ESB
patterns of the past. Figure 6 demonstrates in simple terms how a centralized ESB differs from
fine-grained integration deployment.
Fine-grained integration deployment draws on the benefits of a microservices architecture we listed in
the last section: agility, scalability and resilience:
• Agility: Different teams can work on integrations
independently without deferring to a
centralized group or infrastructure that
can quickly become a bottleneck.
Individual integration flows can be
changed, rebuilt, and deployed
independently of other flows, enabling
safer application of changes and
maximizing speed to production.

• Scalability: Individual flows can be scaled on their
own, allowing you to take advantage of
efficient elastic scaling of cloud
infrastructures.

• Resilience: Isolated integration flows that are
deployed in separate containers cannot
affect one another by stealing shared
resources, such as memory,
connections, or CPU.

Figure 6: Simplistic comparison of a centralized ESB to fine-grained integration deployment
Breaking the single ESB runtime up into many
separate runtimes, each containing just a few
integrations, is explored in detail in "Chapter 4:
Aspect 1: Fine-grained integration deployment".

Aspect 2:
Decentralized
integration ownership

A significant challenge faced by service-oriented
architecture was the way that it
tended to force the creation of central
integration teams, and infrastructure, to
create the service layer.

This created ongoing friction in the pace at
which projects could run since they always
had the central integration team as a
dependency. The central team knew their
integration technology well, but often didn't
understand the applications they were
integrating, so translating requirements
could be slow and error prone.

Many organizations would have preferred
the application teams to own the creation of their
own services, but the technology and
infrastructure of the time didn't enable that.

The move to fine-grained integration
deployment opens a door such that ownership
of the creation and maintenance of integrations
can be distributed.

It's not unreasonable for business application
teams to take on integration work, streamlining
the implementation of new capabilities. This shift
is discussed in more depth in "Chapter 5:
Aspect 2: Decentralized integration ownership".
Aspect 3:
Cloud-native
integration infrastructure

Integration runtimes have changed dramatically
in recent years. So much so that these
lightweight runtimes can be used in truly
cloud-native ways. By this we are referring to their
ability to hand off the burden of many of their
previously proprietary mechanisms for cluster
management, scaling, and availability to the
cloud platform in which they are running.

This entails a lot more than just running them in
a containerized environment. It means they
have to be able to function as "cattle not pets,"
making best use of orchestration capabilities
such as Kubernetes and many other common
cloud standard frameworks.

We expand considerably on the concepts in
"Chapter 6: Aspect 3: Cloud-native integration
infrastructure".

How has the modern
integration runtime changed
to accommodate agile
integration architecture?

Clearly, agile integration architecture requires
that the integration topology be deployed very
differently. A key aspect of that is a modern
integration runtime that can be run in a
container-based environment and is well suited
to cloud-native deployment techniques. Modern
integration runtimes are almost unrecognizable
from their historical peers. Let's have a look at
some of those differences:

• Fast lightweight runtime: They run in
containers such as Docker and are
sufficiently lightweight that they can be
started and stopped in seconds and can be
easily administered by orchestration
frameworks such as Kubernetes.

• Dependency free: They no longer require
databases or message queues, although
obviously, they are very adept at
connecting to them if they need to.

• File system based installation:
They can be installed simply by laying
their binaries out on a file system and
starting them up, ideal for the layered
file systems of Docker images.

• DevOps tooling support: The runtime
should be continuous integration and
deployment-ready. Script and property
file-based install, build, deploy, and
configuration enable "infrastructure
as code" practices. Template scripts for
standard build and deploy tools should
be provided to accelerate inclusion into
DevOps pipelines.

• API-first: The primary communication
protocol should be RESTful APIs.
Exposing integrations as RESTful APIs
should be trivial and based upon
common conventions such as the OpenAPI
specification. Calling downstream
RESTful APIs should be equally trivial,
including discovery via definition files.

• Digital connectivity: In addition to
the rich enterprise connectivity that
has always been provided by integration
runtimes, they must also connect to
modern resources. For example, NoSQL
databases (MongoDB, Cloudant, etc.) and
messaging services such as Kafka.
Furthermore, they need access to a rich
catalogue of application-intelligent
connectors for SaaS (software as a service)
applications such as Salesforce.

• Continuous delivery: Continuous delivery
is enabled by command-line interfaces and
template scripts that mesh into standard
DevOps pipeline tools. This further reduces
the knowledge required to implement
interfaces and increases the pace of delivery.

• Enhanced tooling: Enhanced tooling for
integration means most interfaces can be
built by configuration alone, often by
individuals with no integration background.
With the addition of templates for common
integration patterns, integration best practices
are burned into the tooling, further
simplifying the tasks. Deep integration
specialists are less often required, and some
integration can potentially be taken on by
application teams, as we will see in the next
section on decentralized integration.

Modern integration runtimes are well suited to the three aspects of agile integration architecture:
fine-grained deployment, decentralized ownership, and true cloud-native infrastructure. Before we
turn our attention to these aspects in more detail, we will take a more detailed look at the SOA
pattern for those who may be less familiar with it, and explore where organizations have struggled
to reach the potential they sought.
Section 2:
Exploring agile integration
architecture in detail

Now that you have been introduced to the
concept of agile integration architecture, we are
going to dive into greater detail on its three
main aspects, looking at their characteristics
and presenting a real-life scenario.

- Chapter 4:
Aspect 1: Fine-grained integration
deployment
Addresses the benefits an
organization gains by breaking up the
centralized ESB.

- Chapter 5:
Aspect 2: Decentralized integration
ownership
Discusses how shifting from a
centralized governance and development
practice creates new levels of agility and
innovation.

- Chapter 6:
Aspect 3: Cloud-native integration
infrastructure
Provides a description of how
adopting key technologies and practices from
the cloud-native application discipline can
provide similar benefits to application integration.

Chapter 4: Aspect 1:
Fine-grained integration deployment

If it makes sense to build applications in a more granular fashion, why shouldn't we apply this
idea to integration, too? We could break up the enterprise-wide centralized ESB component into
smaller, more manageable, dedicated components. Perhaps even down to one integration
runtime for each interface we expose, although in many cases it would be sufficient to bunch
the integrations as a handful per component.

Breaking up the centralized ESB

If the large centralized ESB pattern containing all the integrations for the enterprise is reducing
agility for all the reasons noted previously, then why not break it up into smaller pieces? This
section explores why and how we might go about doing that.
The heavily centralized ESB pattern can be broken up in this way, and so can the older hub and spoke
pattern. This makes each individual integration easier to change independently, and improves agility,
scaling, and resilience.
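To see why independent scaling uses infrastructure more efficiently, consider a purely illustrative back-of-the-envelope calculation. The integration names, peak loads, and per-replica capacity below are invented; the point is only the shape of the arithmetic: a monolithic ESB replicates every integration together to cover the busiest one, while fine-grained deployment sizes each integration on its own.

```python
# Hypothetical peak load (requests/sec) per integration, and the load
# one replica of the runtime can serve. All numbers are made up purely
# to illustrate the sizing argument.
peak_load = {"orders": 900, "invoices": 150, "stock": 60}
capacity_per_replica = 100
num_integrations = len(peak_load)

def ceil_div(a, b):
    return -(-a // b)

# Centralized ESB: every integration lives in every replica, so the
# whole stack is replicated enough times to cover the hottest flow.
esb_replicas = ceil_div(max(peak_load.values()), capacity_per_replica)
esb_deployed_copies = esb_replicas * num_integrations

# Fine-grained: each integration is scaled independently.
fine_grained = {name: ceil_div(load, capacity_per_replica)
                for name, load in peak_load.items()}
fine_grained_copies = sum(fine_grained.values())

# 9 replicas x 3 integrations = 27 deployed copies centrally,
# versus 9 + 2 + 1 = 12 right-sized fine-grained runtimes.
```

The same arithmetic is what a cloud platform's elastic scaling performs continuously, replica by replica, rather than once at design time.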
Figure 7 shows the result of breaking up the ESB into separate, independently maintainable and
scalable components.
Figure 7: Breaking up the centralized ESB into independently maintainable and scalable pieces
Fine grained integration
deployment allows you
to make a change to an
individual integration
with complete
confidence that you will
not introduce any
instability into the
environment
We typically call this pattern fine-grained
integration deployment (and a key aspect of
agile integration architecture), to differentiate
it from more purist microservices application
architectures. We also want to mark a distinction
from the ESB term, which is strongly associated
with the more cumbersome centralized
integration architecture.

This approach allows you to make a change to an
individual integration with complete confidence
that you will not introduce any instability into the
environment on which the other integrations are
running. You could choose to use a different
version of the integration runtime, perhaps to
take advantage of new features, without forcing
a risky upgrade to all other integrations. You
could scale up one integration completely
independently of the others, making extremely
efficient use of infrastructure, especially when
using cloud-based models.

There are of course considerations to be worked
through with this approach, such as the
increased complexity with more moving parts.
Also, although the above could be achieved
using virtual machine technology, it is likely that
the long-term benefits would be greater if you
were to use containers such as Docker, and
orchestration mechanisms such as Kubernetes.
Introducing new technologies to the integration
team can add a learning curve. However, these
are the same challenges that an enterprise
would already be facing if they were exploring
microservices architecture in other areas, so
that expertise may already exist within the
organization.

What characteristics does
the integration runtime
need?

To be able to be used for fine-grained
deployment, what characteristics does a modern
integration runtime need?

• Fast, light integration runtime.
The actual runtime is slim, dispensing with
hard dependencies on other components
such as databases for configuration, or
being fundamentally reliant on a specific
message queuing capability. The runtime
itself can now be stopped and started in
seconds, yet none of its rich functionality
has been sacrificed. It is totally reasonable
to consider deploying a small number of
integrations on a runtime like this and then
running them independently rather than
placing all integrations on a centralized
single topology. Installation is equally
minimalist and straightforward, requiring
little more than laying binaries out on a
file system.

• Virtualization and containerization.
The runtime should actively support
containerization technologies such as
Docker and container orchestration
capabilities such as Kubernetes, enabling
non-functional characteristics such as high
availability and elastic scalability to
be managed in the standardized
ways used by other digital
generation runtimes, rather than
relying on proprietary topologies
and technology. This enables new
runtimes to be introduced,
administered, and scaled in
well-known ways without requiring
proprietary expertise.
• Stateless
The runtime needs to be able to run
statelessly. In other words, runtimes
should not be dependent on, or even
aware of, one another. As such they can be
added and taken away from a cluster freely
and new versions of interfaces can be
deployed easily. This enables the container
orchestration to manage scaling, rolling
deployments, A/B testing, canary tests and
more with no proprietary knowledge of the
underlying integration runtime. This stateless
aspect is essential if there are going to be
more runtimes to manage in total.
• Cloud-first
It should be possible to immediately explore a
deployment without the need to install any
local infrastructure. Examples include providing
a cloud based managed service whereby
integrations can be immediately deployed,
with a low entry cost, and an elastic cost model.
Quick starts should be available for simple
creation of deployment environments on
major cloud vendors’ infrastructures.
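Statelessness is what lets an orchestrator treat runtimes as interchangeable "cattle". A minimal way to picture it: if an integration flow keeps no state between requests, any replica can serve any request, so replicas can be added, removed, or replaced freely. The flow and its field names below are invented for illustration.

```python
def transform(request: dict) -> dict:
    """A stateless integration flow: the output depends only on the
    input, never on which replica handled any earlier request."""
    return {"customerId": request["id"].upper(),
            "total": sum(request["items"])}

# Two "replicas" are simply two instances of the same flow. Because no
# state is shared or retained between requests, a load balancer can
# route any request to either replica, and an orchestrator can add or
# remove replicas at will (rolling deployments, canary tests, and so on).
replica_a = transform
replica_b = transform

request = {"id": "c42", "items": [10, 15]}
response = replica_a(request)
# Either replica produces the identical answer for the same request.
```

Any state the flow genuinely needs (sessions, in-flight messages) is pushed out to external stores or queues, which is exactly what makes the container orchestration described above possible without proprietary knowledge of the runtime.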
This provides a taste of how different the integration runtimes of today are from those of the past.
IBM App Connect Enterprise (formerly known as IBM Integration Bus) is a good example of
such a runtime. Integration runtimes are not in themselves an ESB; ESB is just one of the
patterns they can be used for. They are used in a variety of other architectural patterns too,
and increasingly in fine-grained integration deployment.
Granularity

A glaring question then remains: how granular should the decomposition of the integration flows
be? Although you could potentially separate each integration into a separate container, it is
unlikely that such a purist approach would make sense. The real goal is simply to ensure that
unrelated integrations are not housed together. That is, a middle ground with containers that
group related integrations together (as shown in Figure 8) can be sufficient to gain many of the
benefits that were described previously.

Figure 8: Related integrations grouped together can lead to many benefits.
You target the integrations that need the most
independence and break them out on their own.
On the flip side, keep together flows that, for
example, share a common data model for
cross-compatibility. In a situation where
changes to one integration must result in
changes to all related integrations, the benefits
of separation may not be so relevant.
For example, where any change to a shared data
model must be performed on all related
integrations, and they would all need to be
regression tested anyway, having them as
separate entities may only be of minimal value.
However, if one of those related integrations has
a very different scaling profile, there might be a
case for breaking it out on its own. It’s clear that
there will always be a mixture of concerns to
consider when assessing granularity.
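The heuristic just described, keep integrations that share a data model together, but break out any with a very different scaling profile, can be sketched as a simple partitioning function. The integration metadata below is entirely hypothetical; a real assessment would weigh more concerns than these two.

```python
from collections import defaultdict

# Hypothetical metadata: which shared data model each integration
# uses, and whether it has an unusual scaling profile.
integrations = [
    {"name": "create-order", "data_model": "orders",    "high_load": False},
    {"name": "cancel-order", "data_model": "orders",    "high_load": False},
    {"name": "order-status", "data_model": "orders",    "high_load": True},
    {"name": "send-invoice", "data_model": "invoicing", "high_load": False},
]

def group_for_deployment(integrations):
    """Group related integrations into shared containers, but give any
    integration with a distinct scaling profile its own container."""
    groups = defaultdict(list)
    for integration in integrations:
        if integration["high_load"]:
            # Different scaling profile: deploy on its own.
            groups[integration["name"]].append(integration["name"])
        else:
            # Same data model: change together, test together, deploy together.
            groups[integration["data_model"]].append(integration["name"])
    return dict(groups)

groups = group_for_deployment(integrations)
# {'orders': ['create-order', 'cancel-order'],
#  'order-status': ['order-status'],
#  'invoicing': ['send-invoice']}
```

Note how the two order-model flows that must be regression tested together stay co-located, while the hot status query gets its own right-sized runtime.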
Conclusion on fine-grained integration deployment

Fine-grained deployment allows you to reap some of the benefits of microservices architecture
in your integration layer, enabling greater agility because of infrastructurally decoupled
components, elastic scaling of individual integrations, and an inherent improvement in
resilience from the greater isolation.
The right level of granularity
is to allow decomposition of
the integration flows to the
point where unrelated
integrations are not housed
together.
The problem
While this seemed like a reasonable approach,
it created issues with the application
development team. Adding one element to
the model took, at best, two weeks. The
application team had to submit the request,
then attend the CoE meeting, then if agreed
to that model would be released the following
week. From there, the application dev team
would get the model which would contain their
change (and any other change any other team
had submitted for between their last version
and the current version). Then would be able
to start work implementing business code.
After some time, these two week procedural
delays began to add up. From this point we
need to strongly consider if the value of the
highly-governed, enterprise message model is
worth that investment, and if the consistency
gained through the CoE team is worth the
delays. On the benefit side the CoE team can
now create and maintain standards and keep
a level of consistency, on the con side that
consistency is incurring a penalty if we look
at it from the lens of time to market.
A real-life scenario The solution
Let’s examine an organization where an agile
methodology was adopted, a cloud had been
chosen but who still had a centralized team that
maintained an enterprise-wide data model and
ESB. This team realized that they struggled with
even a simple change of adding a new element
to the enterprise message model and the
associated exposed endpoint.
The team that owned the model took requests
from application development teams. Since it
wasn’t reasonable for the modelling CoE (Center
of Excellence) team to take requests constantly,
they met once a week to talk about changes and
determine if the changes would be agreed to.
To reduce change frequency, the model was
released once a week with whatever updates
had been accepted by the CoE. After the model
was changed the ESB team would take action
on any related changes. Because of the
enterprise nature of the ESB this would then
again have to be coordinated with other builds,
other application needs and releases.
The solution was to break the data
model into bounded contexts based on
business focus areas. Furthermore the
integrations were divided up into groups
based on those bounded contexts too,
each running on separate infrastructure.
This allowed each data model and its
associated integrations to evolve
independently as required, while still
providing consistency within each, now
narrower, bounded context. It is
worth noting that although this provided
improved autonomy with regard to data
model changes, the integration team
was still separate from the application
teams, creating scheduling and
requirements-handover latencies.
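As a minimal illustration of what splitting the model by bounded context means in practice (the contexts and fields below are hypothetical, not taken from the organization described), each context keeps its own definition of a shared business concept and can evolve it independently:

```python
from dataclasses import dataclass

# Each bounded context owns its own model of "customer", shaped by its
# business focus. Neither team needs the other's approval to change theirs.

@dataclass
class BillingCustomer:
    """Billing context: cares about invoicing details."""
    customer_id: str
    payment_terms_days: int

@dataclass
class SupportCustomer:
    """Support context: cares about service history."""
    customer_id: str
    open_tickets: int
```

Only the shared identifier needs to stay consistent across contexts; everything else is local to one model.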
In the next section, we will discuss the
importance of exploring changes to the
organizational boundaries too.
Lessons Learned
We can take what we’ve done in “Aspect 1:
Fine-grained integration deployment” a step further.
If you have broken up the integrations into
separate decoupled pieces, you may opt to
distribute those pieces differently from an
ownership and administration point of view as well.
The microservices approach encourages teams
to gain increasing autonomy such that they can
make changes confidently at a more rapid pace.
When applied to integration, that means
allowing the creation and maintenance of
integration artifacts to be owned directly by
application teams rather than by a single
separate centralized team. This distribution of
ownership is often referred to under the broader
topic of “decentralization” which is a common
theme in microservices architecture.
It is extremely important to recognize that
decentralization is a significant change for most
organizations. For some, it may be too different
to take on board and they may have valid
reasons to remain completely centrally
organized. For large organizations, it is unlikely
it will happen consistently across all domains.
It is much more likely that only specific pockets
of the organization will move to this approach -
where it suits them culturally and helps them
meet their business objectives.
We’ll discuss what effect that shift would have
on an organization, and some of the pros and
cons of decentralization.
In the strongly layered architecture described
in “Chapter 2: The journey so far:
SOA, ESBs and APIs”, technology islands such
as integration had their own dedicated, and
often centralized teams. Often referred to as
the “ESB team” or the “SOA team”, they owned
the integration infrastructure, and the creation
and maintenance of everything on it.
We could debate Conway’s Law as to whether
the architecture created the separate team or
the other way around, but the more important
point is that the technology restriction of
needing a single integration infrastructure has
been lifted.
We can now break integrations out into
separate decoupled (containerized) pieces,
each carrying all the dependencies they need,
as demonstrated in Figure 9 below.
Chapter 5: Aspect 2: Decentralized integration ownership

Decentralizing integration ownership
Figure 9: Decentralizing integration to the application teams
Technologically, there may be little difference between this diagram and the fine-grained
integration diagram in the previous chapter. All the same integrations are present; they’re just in a
different place on the diagram. What’s changed is who owns the integration components. Could you
have the application teams take on integration themselves? Could they own the creation and
maintenance of the integrations that belong to their applications? This is feasible because not only
have most integration runtimes become more lightweight, but they have also become significantly
easier to use. You no longer need to be a deep integration specialist to use a good modern
integration runtime. It’s perfectly reasonable that an application developer could make good use
of an integration runtime.
You’ll notice we’ve also shown the
decentralization of the gateways to
denote that the administration of the
API’s exposure moves to the application
teams as well.
There are many potential advantages to
this decentralized integration approach:
• Expertise: A common challenge for
separate SOA teams was that they
didn’t understand the applications
they were offering through services.
The application teams know the data
structures of their own applications
better than anyone.
• Optimization: Fewer teams will be
involved in the end-to-end
implementation of a solution,
significantly reducing the cross-team
chatter, project delivery timeframe,
and inevitable waterfall development
that typically occurs in these cases.
• Empowerment: Governance teams
were viewed as bottlenecks or
checkpoints that had to be passed.
Artificial delays were added to
document, review, and then
approve solutions.
The goal was to create consistency; the con is
that creating that consistency took time. The
fundamental question is: “Does the consistency
justify the additional time?” With decentralization,
each team is empowered to implement the
governance policies that are appropriate to
its scope.
Let’s just reinforce that point we made in the
introduction of this chapter. While
decentralization of integration offers potential
unique benefits, especially in terms of overall
agility, it is a significant departure from the way
many organizations are structured today. The
pros and cons need to be weighed carefully, and
it may be that a blended approach where only
some parts of the organization take on this
approach is more achievable.
To re-iterate, decentralized integration is
primarily an organizational change, not a
technical one. But does decentralized integration
imply an infrastructure change? Possibly, but
not necessarily.
The move toward decentralized ownership of
integrations and their exposure does not
necessarily imply a decentralized
infrastructure. While each application team
clearly could have its own gateways and
container orchestration platforms, this is not a
given. The important thing is that they can
work autonomously.
API management is very commonly
implemented in this way: with a shared
infrastructure (an HA pair of gateways
and a single installation of the API
management components), but with
each application team directly
administering their own APIs as if they
had their own individual infrastructure.
The same can be done with the
integration runtimes by having a
centralized container orchestration
platform on which they can be deployed
but giving application teams the ability
to deploy their own containers
independently of other teams.
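For example, on Kubernetes this kind of shared-but-autonomous setup can be as simple as giving each application team its own namespace on the shared cluster, with a role binding that lets the team deploy only within it. The following is a hedged sketch; the namespace and group names are invented for illustration:

```yaml
# Hypothetical shared-cluster configuration: the "accounts" application
# team gets its own namespace and may deploy only within it.
apiVersion: v1
kind: Namespace
metadata:
  name: team-accounts
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: accounts-deployers
  namespace: team-accounts      # binding is scoped to this namespace only
subjects:
  - kind: Group
    name: team-accounts-developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                    # built-in role: deploy and manage workloads
  apiGroup: rbac.authorization.k8s.io
```

The platform itself stays centrally operated; the team’s autonomy comes entirely from the access-control boundaries around its namespace.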
Does decentralized integration also mean decentralized infrastructure?
Decentralized integration
increases project expertise,
focus and team
empowerment.
Figure 10 below shows how, in a traditional SOA architecture, people were
aligned to their technology stack.
It is worth noting that this decentralized
approach is particularly powerful when moving
to the cloud. Integration is already implemented
in a cloud-friendly way and aligned with systems
of record. Integrations relating to the application
have been separated out from other unrelated
integrations so they can move cleanly with the
application. Furthermore, container-based
infrastructures, if designed using cloud-ready
principles and an infrastructure-as-code
approach, are much more portable to cloud and
make better use of cloud-based scaling and cost
models. With the integration also owned by the
application team, it can be effectively packaged
as part of the application itself.
In short, decentralized integration significantly
improves your cloud readiness.
We are now a very long way from the centralized
ESB pattern—indeed, the term makes no sense
in relation to this fully decentralized pattern—
but we’re still achieving the same intent of
making application data and functions available
for re-use by other applications across and even
beyond the enterprise.
Benefits for cloud

Traditional centralized technology-based organization
Figure 10: Alignment of IT staff according to technology stack in an ESB environment.
A high level organizational chart would look
something like this:
• A front-end team, which would be focused
on the end user’s experience and on
creating UIs.
• An ESB team, which would be focused on
looking at existing assets that could be
provided as enterprise assets. This team
would also be focused on creating the
services that would support the UIs from
the front-end team.
• A back-end team, which would focus on the
implementation of the enterprise assets
surfaced through the ESB. There would be
many teams here working on many different
technologies. Some might be able to provide
SOAP interfaces created in Java, some would
provide COBOL copybooks delivered over MQ,
yet others would create SOAP services
exposed by the mainframe and so on.
This is an organizational structure with an
enterprise focus which allows a company to
rationalize its assets and enforce standards
across a large variety of assets. The downside
of this focus is that time to market for an
individual project was compromised for the
good of the enterprise.
A simple example of this would be a front-end
team wanting to add a single new element to
their screen. If that element doesn’t exist on an
existing SOAP service in the ESB then the ESB
team would have to get engaged. Then,
predictably, this would also impact the back-end
team who would also have to make a change.
Now, generally speaking, the code changes at
each level were simple and straightforward, so
that wasn’t the problem.
The problem was allocating the time for
developers and testers to work on it. The
project managers would have to get involved
to figure out who on their teams had capacity
to add the new element, and how to schedule
the push into the various environments. Now,
if we scale this out we also have competing
priorities. Each project and each new element
would have to be vetted and prioritized, and all
this is what took the time. So now we are in a
situation where there is a lot of overhead, in
terms of time, for a very simple and
straightforward change.
The question is whether the benefits that we
get from governance and common interfaces
are worth the price we pay in operational
challenges. In the modern digital
world of fast-paced innovation we must think
of a new way to enforce standards while
allowing teams to reduce their time to market.
We’re trying to reduce the time between
the business ask and production
implementation, knowing that we may
rethink and reconsider how we implement
the governance processes that were once
in place. Let’s now consider the concept of
microservices and that we’ve broken our
technical assets down into smaller pieces.
If we don’t consider reorganizing, we
might actually make it worse! We’ll
introduce even more hand-offs as the
lines of what is an application and who
owns what begin to blur. We need to
re-think how we align people to technical
assets. Figure 11 gives a preview of
what that new alignment might look like.
Instead of people being centrally aligned
to the area of the architecture they work
on, they’ve been decentralized, and
aligned to business domains. In the past,
we had a front-end team, services teams,
back-end teams and so on; now we have
a number of business teams. For example,
an Account team which works on anything
related to accounts, regardless of whether
that involves a REST API, a microservice,
or a user interface.
Moving to a decentralized,
business-focused team
structure
The teams need to have cross-cutting skills since their goal is to deliver business results, not
technology. To create that diverse skill set, it’s natural to start by picking one person from the
old ESB team, one person from the old front-end team, and another from the back-end team.
It is very important to note that this does not need to be a big-bang re-org across the entire
enterprise; it can be done application by application, piece by piece.
The concept of “big bangs generally lead
to big disasters” isn’t only applicable to
code or applications. It’s applicable to
organizational structure changes as well.
An organization’s landscape will be a
complex heterogeneous blend of new
and old. It may have a “move to cloud”
strategy, yet it will also contain stable
heritage assets. The organizational
structure will continue to reflect that
mixture. Few large enterprises will have
the luxury of shifting entirely to a
decentralized organizational structure,
nor would they be wise to do so.
For example, if there is a stable
application and there is nothing major
on the road map for that application, it
wouldn’t make sense to decompose that
application into microservices. Just as
that wouldn’t make sense, it also would
not make sense to reorganize the team
working on that application.
Decentralization need only occur where
the autonomy it brings is required by the
organization, to enable rapid innovation
in a particular area.
Big bangs generally lead
to big disasters
Figure 11: Decentralized IT staff structures.
Now let’s consider what this change does to an
individual and what they’re concerned about.
The first thing you’ll notice about the next
diagram is that it shows both old and new
architectural styles together. This is the reality
for most organizations. There will be many
existing systems that are older, more resistant
to change, yet critical to the business. Whilst
some of those may be partially or even
completely re-engineered, or replaced, many
will remain for a long time to come. In addition,
there is a new wave of applications being built
for agility and innovation using architectures
such as microservices. There will be new
cloud-based software-as-a-service applications
being added to the mix too.
If we look into the concerns and motivations of the people involved, they fall into two very
different groups, illustrated in Figure 12.
We certainly do not anticipate reorganization
at a company level in its entirety overnight.
The point here is more that as the architecture
evolves, so should the team structure working
on those applications, and indeed the
integration between them. If the architecture
for an application is not changing and is not
foreseen to change, there is no need to reorganize
the people working on that application.
Traditional teams value: re-use, quality, stability, support, monitoring, governance, performance, and fixed requirements. They ask: “What’s its track record? Is the vendor trustworthy? Will it serve me long term? What do the analysts think of it? Could I get sacked for a risky choice?”

Agile teams prioritize project delivery first and value: agility, velocity, autonomy, freemium, cloud native, vendor agnostic, developer is king, rapid prototyping, and a short learning curve. They ask: “Can I start small? Can it help me today? What do my peers think of it? Does it have an active community? Are my skills relevant to my peers?”
A developer of traditional applications cares
about stability and generating code for
re-use and doing a large amount of up-front
due diligence. The agile teams on the other
hand have shifted to a delivery focus. Now,
instead of thinking about the integrity of the
enterprise architecture first and being willing
to compromise on the individual delivery
timelines, they’re now thinking about
delivery first and willing to compromise on
consistency.
Agile teams are more
concerned with the
project delivery than
they are with the
enterprise architecture
integrity.
Figure 12: Traditional developers versus agile teams
Let’s view these two conflicting priorities as two
ends of a pendulum. There are negatives at the
extreme end on both sides. On one side, we
have analysis paralysis where all we’re doing is
talking and thinking about what we should be
doing; on the other side we have the wild-wild-west
where all we’re doing is blindly writing code with
no direction or thought towards the longer-term
picture. Neither side is correct, and both have
grave consequences if allowed to slip too far to
one extreme or the other. The question still
remains: “If I’ve broken my teams into business
domains and they’re enabled and focused on
delivery, how do I get some level of consistency
across all the teams? How do I prevent duplicate
effort? How do I gain some semblance of
consistency and control while still enabling
speed to production?”
The answer is to also consider the architecture
role. In the SOA model the architecture team
would sit in an ivory tower and make decisions.
In the new world, the architects have an evolved
role: practicing architects. An example is
depicted in Figure 13.
Evolving the role of the architect
Here we have many teams and some of the members of those teams are playing a dual role.
On one side they are expected to be an individual contributor on the team, and on the other
side they sit on a committee (or guild) that rationalizes what everyone is working on. They are
creating common best practices from their work on the ground. They are creating shared
frameworks, and sharing their experiences so that other teams don’t blunder into traps
they’ve already encountered. In the SOA world, the goal was to stop duplication and enforce
standards before development even started. In this model the teams are empowered, and the
committee or guild’s responsibility is to raise, address, and fix cross-cutting concerns at the
time of application development.
If there is a downside to decentralization, it may be the question of how to govern the
multitude of different ways that each application team might use the technology – essentially
encouraging standard patterns of use and best practices. Autonomy can lead to divergence.
Figure 13: Practicing architects play a dual role as individual contributors and guild members.
If every application team creates APIs in their
own style and convention, it can become
complex for consumers who want to re-use
those APIs. With SOA, attempts were made to
create rigid standards for every aspect of how
the SOAP protocol would be used, which
inevitably made them harder to understand and
reduced adoption. With RESTful APIs,
it is more common to see convergence on
conventions rather than hard standards. Either
way, the need is clear: Even in decentralized
environments, you still need to find ways to
ensure an appropriate level of commonality
across the enterprise. Of course, if you are
already exploring a microservices-based
approach elsewhere in your enterprise, then you
will be familiar with the challenges of autonomy.
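One way to converge on conventions without heavyweight standards documents is to make them executable. As a sketch (the convention itself is invented for illustration, not prescribed by this book), a guild might publish a small check that teams run in their build pipelines:

```python
import re

# Hypothetical guild convention: path segments are lower-kebab-case
# nouns, with no CRUD verbs embedded in the path.
SEGMENT = re.compile(r"^[a-z][a-z0-9-]*$")
FORBIDDEN = {"get", "create", "update", "delete"}

def check_api_path(path):
    """Return a list of convention violations for one API path."""
    problems = []
    for segment in filter(None, path.strip("/").split("/")):
        if segment.startswith("{") and segment.endswith("}"):
            continue  # path parameter such as {id} is always allowed
        if not SEGMENT.match(segment):
            problems.append(f"segment '{segment}' is not lower-kebab-case")
        if segment in FORBIDDEN:
            problems.append(f"segment '{segment}' looks like a CRUD verb")
    return problems
```

A path like `/accounts/{id}/transactions` passes cleanly, while `/getAccounts` is flagged; the convention is enforced the same way in every team’s pipeline rather than in a review meeting.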
Therefore, the practicing architect is now
responsible for knowing and understanding
what the committee has agreed to, encouraging
their team to follow the governance guidelines,
bringing up cross-cutting concerns that their
team has identified, and sharing what they’re
working on. This role also has the need to be
an individual contributor on one of the teams
so that they feel the pain, or benefit, of the
decisions made by the committee.
The practicing architect
is now responsible
for execution of the
individual team mission
as well as the related
governance
requirements that cut
across the organization.
With the concept of decentralization comes a
natural skepticism over whether the committee
or guild’s influence will be persuasive enough
to enforce the standards they’ve agreed to.
Embedding our “practicing architect” into the
team may not be enough.
Let’s consider how the traditional governance
cycle often occurs. It often involves the
application team working through complex
standards documents, and having meetings
with the governance board prior to the intended
implementation of the application to establish
agreement. Then the application team would
proceed to development activities, normally
beyond the eyes of the governance team.
On or near completion, and close to the agreed
production date, a governance review would occur.
Enforcing governance in a decentralized structure
Inevitably the proposed project
architecture and the actual resultant
project architecture will be different,
and at times, radically different. Where
the architecture review board had an
objection, there would almost certainly
not be time to resolve it. With the
exception of extreme issues (such as
a critical security flaw), the production
date typically goes ahead, and the
technical debt is added to an
ever-growing backlog.
Clearly the shift we’ve discussed of
placing practicing architects in the teams
encourages alignment. However, the
architect is now under project delivery
pressure which may mean they fall into
the same trap as the teams originally did,
sacrificing alignment to hit deadlines.
What more can we do, via the practicing
architect role, to encourage enforcement
of standards?
The key ingredient for success in a modern
agile development environment is
automation: automated build pipelines,
automated testing, automated
deployment and more. The practicing
architect needs to be actively involved
in ways to automate the governance.
This could be anything from automated code
review, to templates for build pipelines, to
standard Helm charts to ensure the target
deployment topologies are homogeneous even
though they are independent. In short, the
focus is on enforcement of standards through
frameworks, templates and automation, rather
than through complex documents, and review
processes. While this idea of getting the
technology to enforce the standards is far from
new, the proliferation of open standards in the
DevOps tool chain and cloud platforms in
general is making it much more achievable.
Let’s start with an example: say that you have
microservices components that issue HTTP
requests. For every HTTP request, you would
like to log in a common format how long that
HTTP transaction took as well as the HTTP
response code. Now, if every microservice did
this differently, there wouldn’t be a unified way
of looking at all traffic. Another role of the
practicing architect is to build helper artifacts
that would then be used by the microservices.
In this way, instead of the governance process
being a gate, it is an accelerator through the
architects being embedded in the teams,
working on code alongside of them. Now the
governance cycle is being done with the teams,
and instead of reviewing documents, the code is
the document and the checkpoint is to make
sure that the common code is being used.
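A sketch of such a helper artifact might look like the following (the module name, log format, and field names are assumptions for illustration, not something the book prescribes). Every microservice that issues its HTTP requests through this one function produces identical, machine-readable log records:

```python
import json
import logging
import time
import urllib.request

logger = logging.getLogger("common.http")

def logged_request(url, opener=urllib.request.urlopen):
    """Issue an HTTP GET and log its duration and status in one shared format.

    A hypothetical common artifact a practicing architect might publish so
    that all teams report HTTP traffic the same way. The opener parameter
    exists so tests can substitute a stub for real network calls.
    """
    start = time.monotonic()
    status = None
    try:
        with opener(url) as response:
            status = getattr(response, "status", None)
            body = response.read()
        return status, body
    finally:
        duration_ms = round((time.monotonic() - start) * 1000, 2)
        # One agreed, machine-readable record per HTTP transaction.
        logger.info(json.dumps(
            {"url": url, "status": status, "duration_ms": duration_ms}))
```

Because the format lives in code rather than a standards document, changing it once changes it for every consumer, and the governance checkpoint reduces to “is the common helper being used?”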
Another dimension to note is that not all teams
are created equally. Some teams are cranking
out code like a factory, others are thinking
ahead to upcoming challenges, and some teams
are a mix of the two. An advanced team that
succeeds in finding a way to automate a
particular governance challenge will be much
more successful evangelists for that mechanism
than any attempt for it to be created by a
separate governance team.
As we discuss the practicing architect, it
may seem that too much is being put on their
shoulders. They are responsible for application
delivery, they are responsible for taking part
in the committee discussed in the previous
section, and now we are adding the additional
element of writing common code to be
used by other application development teams.
Is it too much?
A common way to offload some of that work is
to create a dedicated team that is under the
direction of the practicing architect who is
writing and testing this code. The authoring of
the code isn’t a huge challenge, but the testing
of that common code is. The reason for placing
a high value on testing is because of the
potential impact to break or introduce bugs into
all the applications that use that code. For this
reason, extra due diligence and care must be
taken, justifying the investment in the additional
resource allocation.
Clearly our aim should be to ensure that
general developers in the application
teams can focus on writing code that
delivers business value. With the
architects writing or overseeing common
components which naturally enforce the
governance concerns, the application
teams can spend more of their time on
value, and less in governance sessions.
Governance based on complex
documentation and heavy review
procedures are rarely adhered to
consistently, whereas inline tooling based
standardization happens more naturally.
The next and very critical person to consider is
the developer. Developers are now to be
expected and encouraged to be a full stack
developer and solve the business problem with
whatever technology is required. This puts an
incredible strain on each individual developer in
terms of the skills that they must acquire. It’s not
possible for the developer to know the deep ins
and outs of every aspect of each technology, so
something has to give. As we’ll see, what gives is
the infrastructure learning curve – we are finding
better and better ways to make infrastructural
concerns look the same from one product to
another.
In the pre-cloud days, developers had to learn
multiple aspects of each technology as
categorized in Figure 14.
How can we have multi-skilled developers?
Figure 14: Required pre-cloud technology skills.
Decentralization allows developers to focus on what their team is
responsible for: delivering business results by creating artifacts.
One day, in an ideal world, the only unique thing about using a technology will be the creation
of the artifact such as the code, or in the case of integration, the mediation flows and data
maps. Everything else will come from the environment. We’ll discuss this infrastructural change
in more depth in the next chapter.
Each column represents a technology and each
row represents an area that the developer had
to know and care about, and understand the
implications of their code on. They had to know
individually for each technology how to install,
how much resources it would need allocated to
it, how to cater for high availability, scaling and
security. How to create the artifacts, how to
compile and build them, where to store them,
how to deploy them, and how to monitor them
at runtime. All of this was unique and specific to
each technology. It is no wonder that we had
technology-specific teams!
However, the common capabilities and
frameworks of typical cloud platforms now
attempt to take care of many of those concerns
in a standardized way. They allow the developer
to focus on what their team is responsible for,
delivering business results by creating artifacts!
Figure 15 shows how decentralization removes
the ‘white noise’.
The grey area represents areas that still need to
be addressed but are now no longer at the front
of the developer’s mind. Standardized
technology such as (Docker) containers, and
orchestration frameworks such as Kubernetes,
and routing frameworks such as Istio, enable
management of runtimes in terms of scaling,
high availability, deployment and so on.
Furthermore, standardization in the way
products present themselves via command line
interfaces, APIs, and simple file system-based
install and deployment mean that standard
tools can be used to install, build and deploy, too.
Figure 15: Decentralization removes the ‘white noise’ of infrastructure concerns.
Of course, decentralization isn’t right for every
situation. It may work for some organizations,
or for some parts of some organizations but not
for others. Application teams for older
applications may not have the right skill sets
to take on the integration work. It may be that
integration specialists need to be seeded into
their team. This approach is a tool for
potentially creating greater agility for change
and scaling, but what if the application has been
largely frozen for some time?
At the end of the day, some organizations will
find it more manageable to retain a more
centralized integration team. The approach
should be applied where the benefits are needed
most. That said, this style of decentralized
integration is what many organizations and
indeed application teams have always wanted
to do, but they may have had to overcome
certain technological barriers first.
The core concept is to focus on delivering
business value and a shift from a focus on the
enterprise to a focus on the developer. This
concept has in part manifested itself by the
movement from centralized teams into more
business specific ones, but also by more subtle
changes such as the role of a practicing architect.
This concept is also rooted in actual technology improvements that are taking concerns away
from the developer and doing those uniformly through the facilities of the cloud platform.
As ever, we can refer right back to Conway’s Law (circa 1967): if we’re changing the way we
architect systems and we want it to stick, we also need to change the organizational structure.
Conclusions on decentralized
integration ownership
The problem
The main problem was lack of end state vision.
Because each piece of work was taken
independently, teams often did the minimum amount
of work to accomplish the business objective. The
main motivators for each team were risk avoidance
and drive to meet project deadlines – and a desire
not to break any existing functionality. Since each
team had little experience with the code they needed
to change, they began making tactical decisions to
lower risk.
Developers were afraid to break currently working
functionality. As they began new work, they would
work around code that was authored from another
team. Therefore, all new code was appended to
existing code. The microservices continued growing
and growing over time, which then resulted in the
microservices not being so micro.
This led to technical debt piling up. This technical
debt was not apparent over the first few releases,
but then, 5 or 6 releases in, this became a real
problem. The next release required the investment
of unravelling past tactical decisions. Over time the
re-hashing of previously made decisions outweighed
the agility that this organization structure had
originally produced.
A real-life scenario
An organization who committed to decentralization
was working with a microservices architecture that
had now been widely adopted, and many small,
independent assets were created at a rapid pace. In
addition to that, the infrastructure had migrated over
to a Docker-based environment. The organization
didn’t believe they needed to align their developers
with specific technical assets.
The original thought was that any team could work
on any technical component. If the feature required
a team to add an element onto an existing screen,
that team was empowered and had free rein to
modify whatever assets were needed to
accomplish the business goal. There was a level of
coordination that occurred before the feature was
worked on so that no two teams would be working
on the same code at the same time. This avoided the
need for merging of code.
In the beginning, for the first 4-5 releases, this
worked out beautifully. Teams could work
independently and could move quickly. However,
over time problems started to arise.
Lessons Learned
The solution
The solution was to align teams with microservices components and to create a clear delineation of responsibilities, approached rationally. The first step was to break the entire solution down into bounded contexts, then assign teams ownership over those bounded contexts. A bounded context is simply a business objective together with the grouping of business functions that serve it. An individual team could own many microservices components, but those assets all had to be aligned to the same business objective. Clear lines of ownership and responsibility meant that teams thought more strategically about code modifications. Creating good regression tests became much more important, since each team knew it would have to live with its past decisions.
Importantly, another dimension of these new ownership lines meant fewer handoffs between teams to accomplish a business objective.
One team would own the business function
from start to finish - they would modify the
front-end code, the integration layer and the
back-end code, including the storage. This
grouping of assets is clearly defined in
microservices architecture, and that principle
should also carry through to organization
structures to reduce the handoffs between
teams and increase operational efficiency.
If we are to be truly effective in transitioning to an agile integration architecture, we will need to do more than simply break out the integrations into separate containers. We also need to apply a cloud-native, "cattle not pets" approach to the design and configuration of our integrations.
Once we move to a fully cloud-native approach, integration becomes just another option in the toolbox of lightweight runtimes available to people building microservices-based applications. Instead of using integration only to connect applications together, it can also be used within applications, wherever a component performs an integration-centric task.
Times have changed. Hardware is virtualized.
Also, with container technologies, such as
Docker, you can reduce the surrounding
operating system to a minimum so that you can
start an isolated process in seconds at most.
Using cloud-based infrastructure, scaling can be
horizontal, adding and removing servers or
containers at will, and adopting a usage-based
pricing model. With that freedom, you can now
deploy thin slivers of application logic on
minimalist runtimes into lightweight
independent containers. Running significantly
more than just a pair of containers is common
and limits the effects of one container going
down. By using container orchestration
frameworks, such as Kubernetes, you can
introduce or dispose of containers rapidly to
scale workloads up and down. These containers
are treated more like a herd of cattle.
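To make the "cattle" idea concrete, the following is a minimal sketch of a Kubernetes Deployment manifest for a lightweight integration runtime. All names here (the order-integration deployment, its labels, and the image reference) are hypothetical placeholders for illustration, not taken from this book:

```yaml
# Hypothetical Deployment for a lightweight integration runtime.
# Names and image are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-integration
spec:
  replicas: 3              # scale up or down by changing this value
  selector:
    matchLabels:
      app: order-integration
  template:
    metadata:
      labels:
        app: order-integration
    spec:
      containers:
      - name: runtime
        image: example.com/order-integration:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

With a manifest like this, scaling is declarative: change the replica count (or run `kubectl scale deployment order-integration --replicas=6`) and the orchestrator introduces or disposes of containers to match. If any single container fails, it is simply replaced rather than nursed back to health, which is exactly the "cattle" treatment described above.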
Let's take a brief look at where that concept came from before we discuss how to apply it in the integration space.
In a time when servers took weeks to provision
and minutes to start, it was fashionable to boast
about how long you could keep your servers
running without failure. Hardware was expensive,
and the more applications you could pack onto a
server, the lower your running costs were. High
availability (HA) was handled by using pairs of
servers, and scaling was vertical by adding more
cores to a machine. Each server was unique,
precious, and treated, well, like a pet.
Let’s examine what the common “pets”
model looks like. In the analogy, if you
view a server (or a pair of servers that
attempt to appear as a single unit) as
indispensable, it is a pet. In the context
of integration, this concept is similar to
the centralized integration topologies
that the traditional approach has used to
solve enterprise application integration
(EAI) and service-oriented architecture
use cases.
Chapter 6: Aspect 3: Cloud-native integration infrastructure
Integration pets:
The traditional approach
Cattle not pets
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018
Agile Integration eBook from 2018

More Related Content

What's hot

The evolving story for Agile Integration Architecture in 2019
The evolving story for Agile Integration Architecture in 2019The evolving story for Agile Integration Architecture in 2019
The evolving story for Agile Integration Architecture in 2019Kim Clark
 
Real-time Stream Processing with Apache Flink
Real-time Stream Processing with Apache FlinkReal-time Stream Processing with Apache Flink
Real-time Stream Processing with Apache FlinkDataWorks Summit
 
Microservices Architecture - Cloud Native Apps
Microservices Architecture - Cloud Native AppsMicroservices Architecture - Cloud Native Apps
Microservices Architecture - Cloud Native AppsAraf Karsh Hamid
 
Kafka connect 101
Kafka connect 101Kafka connect 101
Kafka connect 101Whiteklay
 
Apicurio Registry: Event-driven APIs & Schema governance for Apache Kafka | F...
Apicurio Registry: Event-driven APIs & Schema governance for Apache Kafka | F...Apicurio Registry: Event-driven APIs & Schema governance for Apache Kafka | F...
Apicurio Registry: Event-driven APIs & Schema governance for Apache Kafka | F...HostedbyConfluent
 
Monoliths and Microservices
Monoliths and Microservices Monoliths and Microservices
Monoliths and Microservices Bozhidar Bozhanov
 
Container orchestration overview
Container orchestration overviewContainer orchestration overview
Container orchestration overviewWyn B. Van Devanter
 
Kafka Intro With Simple Java Producer Consumers
Kafka Intro With Simple Java Producer ConsumersKafka Intro With Simple Java Producer Consumers
Kafka Intro With Simple Java Producer ConsumersJean-Paul Azar
 
Building Event-Driven Services with Apache Kafka
Building Event-Driven Services with Apache KafkaBuilding Event-Driven Services with Apache Kafka
Building Event-Driven Services with Apache Kafkaconfluent
 
Microservices, Containers, Kubernetes, Kafka, Kanban
Microservices, Containers, Kubernetes, Kafka, KanbanMicroservices, Containers, Kubernetes, Kafka, Kanban
Microservices, Containers, Kubernetes, Kafka, KanbanAraf Karsh Hamid
 
Saga about distributed business transactions in microservices world
Saga about distributed business transactions in microservices worldSaga about distributed business transactions in microservices world
Saga about distributed business transactions in microservices worldMikalai Alimenkou
 
Service Mesh - Why? How? What?
Service Mesh - Why? How? What?Service Mesh - Why? How? What?
Service Mesh - Why? How? What?Orkhan Gasimov
 
Principles of microservices XP Days Ukraine
Principles of microservices   XP Days UkrainePrinciples of microservices   XP Days Ukraine
Principles of microservices XP Days UkraineSam Newman
 
Azure kubernetes service (aks)
Azure kubernetes service (aks)Azure kubernetes service (aks)
Azure kubernetes service (aks)Akash Agrawal
 
Apache Camel v3, Camel K and Camel Quarkus
Apache Camel v3, Camel K and Camel QuarkusApache Camel v3, Camel K and Camel Quarkus
Apache Camel v3, Camel K and Camel QuarkusClaus Ibsen
 
Anatomy of a Spring Boot App with Clean Architecture - Spring I/O 2023
Anatomy of a Spring Boot App with Clean Architecture - Spring I/O 2023Anatomy of a Spring Boot App with Clean Architecture - Spring I/O 2023
Anatomy of a Spring Boot App with Clean Architecture - Spring I/O 2023Steve Pember
 
Kubernetes for Beginners: An Introductory Guide
Kubernetes for Beginners: An Introductory GuideKubernetes for Beginners: An Introductory Guide
Kubernetes for Beginners: An Introductory GuideBytemark
 

What's hot (20)

The evolving story for Agile Integration Architecture in 2019
The evolving story for Agile Integration Architecture in 2019The evolving story for Agile Integration Architecture in 2019
The evolving story for Agile Integration Architecture in 2019
 
Real-time Stream Processing with Apache Flink
Real-time Stream Processing with Apache FlinkReal-time Stream Processing with Apache Flink
Real-time Stream Processing with Apache Flink
 
Microservices Architecture - Cloud Native Apps
Microservices Architecture - Cloud Native AppsMicroservices Architecture - Cloud Native Apps
Microservices Architecture - Cloud Native Apps
 
Kafka connect 101
Kafka connect 101Kafka connect 101
Kafka connect 101
 
Apicurio Registry: Event-driven APIs & Schema governance for Apache Kafka | F...
Apicurio Registry: Event-driven APIs & Schema governance for Apache Kafka | F...Apicurio Registry: Event-driven APIs & Schema governance for Apache Kafka | F...
Apicurio Registry: Event-driven APIs & Schema governance for Apache Kafka | F...
 
Monoliths and Microservices
Monoliths and Microservices Monoliths and Microservices
Monoliths and Microservices
 
Container orchestration overview
Container orchestration overviewContainer orchestration overview
Container orchestration overview
 
Kubernetes Basics
Kubernetes BasicsKubernetes Basics
Kubernetes Basics
 
Kafka Intro With Simple Java Producer Consumers
Kafka Intro With Simple Java Producer ConsumersKafka Intro With Simple Java Producer Consumers
Kafka Intro With Simple Java Producer Consumers
 
Building Event-Driven Services with Apache Kafka
Building Event-Driven Services with Apache KafkaBuilding Event-Driven Services with Apache Kafka
Building Event-Driven Services with Apache Kafka
 
IBM MQ vs Apache ActiveMQ
IBM MQ vs Apache ActiveMQIBM MQ vs Apache ActiveMQ
IBM MQ vs Apache ActiveMQ
 
Microservices, Containers, Kubernetes, Kafka, Kanban
Microservices, Containers, Kubernetes, Kafka, KanbanMicroservices, Containers, Kubernetes, Kafka, Kanban
Microservices, Containers, Kubernetes, Kafka, Kanban
 
Saga about distributed business transactions in microservices world
Saga about distributed business transactions in microservices worldSaga about distributed business transactions in microservices world
Saga about distributed business transactions in microservices world
 
Apache Kafka
Apache KafkaApache Kafka
Apache Kafka
 
Service Mesh - Why? How? What?
Service Mesh - Why? How? What?Service Mesh - Why? How? What?
Service Mesh - Why? How? What?
 
Principles of microservices XP Days Ukraine
Principles of microservices   XP Days UkrainePrinciples of microservices   XP Days Ukraine
Principles of microservices XP Days Ukraine
 
Azure kubernetes service (aks)
Azure kubernetes service (aks)Azure kubernetes service (aks)
Azure kubernetes service (aks)
 
Apache Camel v3, Camel K and Camel Quarkus
Apache Camel v3, Camel K and Camel QuarkusApache Camel v3, Camel K and Camel Quarkus
Apache Camel v3, Camel K and Camel Quarkus
 
Anatomy of a Spring Boot App with Clean Architecture - Spring I/O 2023
Anatomy of a Spring Boot App with Clean Architecture - Spring I/O 2023Anatomy of a Spring Boot App with Clean Architecture - Spring I/O 2023
Anatomy of a Spring Boot App with Clean Architecture - Spring I/O 2023
 
Kubernetes for Beginners: An Introductory Guide
Kubernetes for Beginners: An Introductory GuideKubernetes for Beginners: An Introductory Guide
Kubernetes for Beginners: An Introductory Guide
 

Similar to Agile Integration eBook from 2018

V mware organizing-for-the-cloud-whitepaper
V mware organizing-for-the-cloud-whitepaperV mware organizing-for-the-cloud-whitepaper
V mware organizing-for-the-cloud-whitepaperEMC
 
A Cloud Decision making Framework
A Cloud Decision making FrameworkA Cloud Decision making Framework
A Cloud Decision making FrameworkAndy Marshall
 
Cisco Cloud Computing White Paper
Cisco Cloud Computing White PaperCisco Cloud Computing White Paper
Cisco Cloud Computing White Paperlamcindoe
 
Cisco Cloud White Paper
Cisco  Cloud  White  PaperCisco  Cloud  White  Paper
Cisco Cloud White Paperjtiblier
 
S00193ed1v01y200905cac006
S00193ed1v01y200905cac006S00193ed1v01y200905cac006
S00193ed1v01y200905cac006guest120d945
 
Networking guide lync_server
Networking guide lync_serverNetworking guide lync_server
Networking guide lync_serverPeter Diaz
 
Cloud Computing
Cloud ComputingCloud Computing
Cloud ComputingGoodzuma
 
1 cloudcomputing intro
1 cloudcomputing intro1 cloudcomputing intro
1 cloudcomputing introyogiman17
 
Cloudcomputing sun
Cloudcomputing sunCloudcomputing sun
Cloudcomputing sunNikkk20
 
Multi-Cloud Service Delivery and End-to-End Management
Multi-Cloud Service Delivery and End-to-End ManagementMulti-Cloud Service Delivery and End-to-End Management
Multi-Cloud Service Delivery and End-to-End ManagementEric Troup
 
Life above the_service_tier_v1.1
Life above the_service_tier_v1.1Life above the_service_tier_v1.1
Life above the_service_tier_v1.1Ganesh Prasad
 
Mohan_Dissertation (1)
Mohan_Dissertation (1)Mohan_Dissertation (1)
Mohan_Dissertation (1)Mohan Bhargav
 

Similar to Agile Integration eBook from 2018 (20)

V mware organizing-for-the-cloud-whitepaper
V mware organizing-for-the-cloud-whitepaperV mware organizing-for-the-cloud-whitepaper
V mware organizing-for-the-cloud-whitepaper
 
A Cloud Decision making Framework
A Cloud Decision making FrameworkA Cloud Decision making Framework
A Cloud Decision making Framework
 
Cloud computing
Cloud computingCloud computing
Cloud computing
 
Cisco Cloud Computing White Paper
Cisco Cloud Computing White PaperCisco Cloud Computing White Paper
Cisco Cloud Computing White Paper
 
Cisco Cloud White Paper
Cisco  Cloud  White  PaperCisco  Cloud  White  Paper
Cisco Cloud White Paper
 
IBM Cloud
IBM CloudIBM Cloud
IBM Cloud
 
S00193ed1v01y200905cac006
S00193ed1v01y200905cac006S00193ed1v01y200905cac006
S00193ed1v01y200905cac006
 
Networking guide lync_server
Networking guide lync_serverNetworking guide lync_server
Networking guide lync_server
 
Cloud Computing
Cloud ComputingCloud Computing
Cloud Computing
 
1 cloudcomputing intro
1 cloudcomputing intro1 cloudcomputing intro
1 cloudcomputing intro
 
Cloud computing
Cloud computingCloud computing
Cloud computing
 
Intel Cloud
Intel CloudIntel Cloud
Intel Cloud
 
ITSM Approach for Clouds
 ITSM Approach for Clouds ITSM Approach for Clouds
ITSM Approach for Clouds
 
Cloudcomputing sun
Cloudcomputing sunCloudcomputing sun
Cloudcomputing sun
 
Embedding BI
Embedding BIEmbedding BI
Embedding BI
 
Multi-Cloud Service Delivery and End-to-End Management
Multi-Cloud Service Delivery and End-to-End ManagementMulti-Cloud Service Delivery and End-to-End Management
Multi-Cloud Service Delivery and End-to-End Management
 
This is
This is This is
This is
 
CS4099Report
CS4099ReportCS4099Report
CS4099Report
 
Life above the_service_tier_v1.1
Life above the_service_tier_v1.1Life above the_service_tier_v1.1
Life above the_service_tier_v1.1
 
Mohan_Dissertation (1)
Mohan_Dissertation (1)Mohan_Dissertation (1)
Mohan_Dissertation (1)
 

More from Kim Clark

Cloud native defined
Cloud native definedCloud native defined
Cloud native definedKim Clark
 
2008-2014 Integration Design - Course Summary for slideshare.pdf
2008-2014 Integration Design - Course Summary for slideshare.pdf2008-2014 Integration Design - Course Summary for slideshare.pdf
2008-2014 Integration Design - Course Summary for slideshare.pdfKim Clark
 
Interface characteristics - Kim Clark and Brian Petrini
Interface characteristics - Kim Clark and Brian PetriniInterface characteristics - Kim Clark and Brian Petrini
Interface characteristics - Kim Clark and Brian PetriniKim Clark
 
Implementing zero trust in IBM Cloud Pak for Integration
Implementing zero trust in IBM Cloud Pak for IntegrationImplementing zero trust in IBM Cloud Pak for Integration
Implementing zero trust in IBM Cloud Pak for IntegrationKim Clark
 
Automating agile integration
Automating agile integrationAutomating agile integration
Automating agile integrationKim Clark
 
The resurgence of event driven architecture
The resurgence of event driven architectureThe resurgence of event driven architecture
The resurgence of event driven architectureKim Clark
 
Convergence of Integration and Application Development
Convergence of Integration and Application DevelopmentConvergence of Integration and Application Development
Convergence of Integration and Application DevelopmentKim Clark
 
Scaling Integration
Scaling IntegrationScaling Integration
Scaling IntegrationKim Clark
 
Multi-cloud integration architecture
Multi-cloud integration architectureMulti-cloud integration architecture
Multi-cloud integration architectureKim Clark
 
Agile Integration Architecture: A Containerized and Decentralized Approach to...
Agile Integration Architecture: A Containerized and Decentralized Approach to...Agile Integration Architecture: A Containerized and Decentralized Approach to...
Agile Integration Architecture: A Containerized and Decentralized Approach to...Kim Clark
 
Where can you use serverless?  How does it relate to APIs, integration and mi...
Where can you use serverless?  How does it relate to APIs, integration and mi...Where can you use serverless?  How does it relate to APIs, integration and mi...
Where can you use serverless?  How does it relate to APIs, integration and mi...Kim Clark
 
Building enterprise depth APIs with the IBM hybrid integration portfolio
Building enterprise depth APIs with the IBM hybrid integration portfolioBuilding enterprise depth APIs with the IBM hybrid integration portfolio
Building enterprise depth APIs with the IBM hybrid integration portfolioKim Clark
 
3298 microservices and how they relate to esb api and messaging - inter con...
3298   microservices and how they relate to esb api and messaging - inter con...3298   microservices and how they relate to esb api and messaging - inter con...
3298 microservices and how they relate to esb api and messaging - inter con...Kim Clark
 
Hybrid integration reference architecture
Hybrid integration reference architectureHybrid integration reference architecture
Hybrid integration reference architectureKim Clark
 
MuCon 2015 - Microservices in Integration Architecture
MuCon 2015 - Microservices in Integration ArchitectureMuCon 2015 - Microservices in Integration Architecture
MuCon 2015 - Microservices in Integration ArchitectureKim Clark
 
Microservices: Where do they fit within a rapidly evolving integration archit...
Microservices: Where do they fit within a rapidly evolving integration archit...Microservices: Where do they fit within a rapidly evolving integration archit...
Microservices: Where do they fit within a rapidly evolving integration archit...Kim Clark
 
Placement of BPM runtime components in an SOA environment
Placement of BPM runtime components in an SOA environmentPlacement of BPM runtime components in an SOA environment
Placement of BPM runtime components in an SOA environmentKim Clark
 
What’s behind a high quality web API? Ensure your APIs are more than just a ...
What’s behind a high quality web API? Ensure your APIs are more than just a ...What’s behind a high quality web API? Ensure your APIs are more than just a ...
What’s behind a high quality web API? Ensure your APIs are more than just a ...Kim Clark
 
Differentiating between web APIs, SOA, & integration …and why it matters
Differentiating between web APIs, SOA, & integration…and why it mattersDifferentiating between web APIs, SOA, & integration…and why it matters
Differentiating between web APIs, SOA, & integration …and why it mattersKim Clark
 

More from Kim Clark (19)

Cloud native defined
Cloud native definedCloud native defined
Cloud native defined
 
2008-2014 Integration Design - Course Summary for slideshare.pdf
2008-2014 Integration Design - Course Summary for slideshare.pdf2008-2014 Integration Design - Course Summary for slideshare.pdf
2008-2014 Integration Design - Course Summary for slideshare.pdf
 
Interface characteristics - Kim Clark and Brian Petrini
Interface characteristics - Kim Clark and Brian PetriniInterface characteristics - Kim Clark and Brian Petrini
Interface characteristics - Kim Clark and Brian Petrini
 
Implementing zero trust in IBM Cloud Pak for Integration
Implementing zero trust in IBM Cloud Pak for IntegrationImplementing zero trust in IBM Cloud Pak for Integration
Implementing zero trust in IBM Cloud Pak for Integration
 
Automating agile integration
Automating agile integrationAutomating agile integration
Automating agile integration
 
The resurgence of event driven architecture
The resurgence of event driven architectureThe resurgence of event driven architecture
The resurgence of event driven architecture
 
Convergence of Integration and Application Development
Convergence of Integration and Application DevelopmentConvergence of Integration and Application Development
Convergence of Integration and Application Development
 
Scaling Integration
Scaling IntegrationScaling Integration
Scaling Integration
 
Multi-cloud integration architecture
Multi-cloud integration architectureMulti-cloud integration architecture
Multi-cloud integration architecture
 
Agile Integration Architecture: A Containerized and Decentralized Approach to...
Agile Integration Architecture: A Containerized and Decentralized Approach to...Agile Integration Architecture: A Containerized and Decentralized Approach to...
Agile Integration Architecture: A Containerized and Decentralized Approach to...
 
Where can you use serverless?  How does it relate to APIs, integration and mi...
Where can you use serverless?  How does it relate to APIs, integration and mi...Where can you use serverless?  How does it relate to APIs, integration and mi...
Where can you use serverless?  How does it relate to APIs, integration and mi...
 
Building enterprise depth APIs with the IBM hybrid integration portfolio
Building enterprise depth APIs with the IBM hybrid integration portfolioBuilding enterprise depth APIs with the IBM hybrid integration portfolio
Building enterprise depth APIs with the IBM hybrid integration portfolio
 
3298 microservices and how they relate to esb api and messaging - inter con...
3298   microservices and how they relate to esb api and messaging - inter con...3298   microservices and how they relate to esb api and messaging - inter con...
3298 microservices and how they relate to esb api and messaging - inter con...
 
Hybrid integration reference architecture
Hybrid integration reference architectureHybrid integration reference architecture
Hybrid integration reference architecture
 
MuCon 2015 - Microservices in Integration Architecture
MuCon 2015 - Microservices in Integration ArchitectureMuCon 2015 - Microservices in Integration Architecture
MuCon 2015 - Microservices in Integration Architecture
 
Microservices: Where do they fit within a rapidly evolving integration archit...
Microservices: Where do they fit within a rapidly evolving integration archit...Microservices: Where do they fit within a rapidly evolving integration archit...
Microservices: Where do they fit within a rapidly evolving integration archit...
 
Placement of BPM runtime components in an SOA environment
Placement of BPM runtime components in an SOA environmentPlacement of BPM runtime components in an SOA environment
Placement of BPM runtime components in an SOA environment
 
What’s behind a high quality web API? Ensure your APIs are more than just a ...
What’s behind a high quality web API? Ensure your APIs are more than just a ...What’s behind a high quality web API? Ensure your APIs are more than just a ...
What’s behind a high quality web API? Ensure your APIs are more than just a ...
 
Differentiating between web APIs, SOA, & integration …and why it matters
Differentiating between web APIs, SOA, & integration…and why it mattersDifferentiating between web APIs, SOA, & integration…and why it matters
Differentiating between web APIs, SOA, & integration …and why it matters
 

Recently uploaded

ADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDE
ADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDEADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDE
ADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDELiveplex
 
Videogame localization & technology_ how to enhance the power of translation.pdf
Videogame localization & technology_ how to enhance the power of translation.pdfVideogame localization & technology_ how to enhance the power of translation.pdf
Videogame localization & technology_ how to enhance the power of translation.pdfinfogdgmi
 
How Accurate are Carbon Emissions Projections?
How Accurate are Carbon Emissions Projections?How Accurate are Carbon Emissions Projections?
How Accurate are Carbon Emissions Projections?IES VE
 
Igniting Next Level Productivity with AI-Infused Data Integration Workflows
Igniting Next Level Productivity with AI-Infused Data Integration WorkflowsIgniting Next Level Productivity with AI-Infused Data Integration Workflows
Igniting Next Level Productivity with AI-Infused Data Integration WorkflowsSafe Software
 
Artificial Intelligence & SEO Trends for 2024
Artificial Intelligence & SEO Trends for 2024Artificial Intelligence & SEO Trends for 2024
Artificial Intelligence & SEO Trends for 2024D Cloud Solutions
 
Secure your environment with UiPath and CyberArk technologies - Session 1
Secure your environment with UiPath and CyberArk technologies - Session 1Secure your environment with UiPath and CyberArk technologies - Session 1
Secure your environment with UiPath and CyberArk technologies - Session 1DianaGray10
 
Computer 10: Lesson 10 - Online Crimes and Hazards
Computer 10: Lesson 10 - Online Crimes and HazardsComputer 10: Lesson 10 - Online Crimes and Hazards
Computer 10: Lesson 10 - Online Crimes and HazardsSeth Reyes
 
AI You Can Trust - Ensuring Success with Data Integrity Webinar
AI You Can Trust - Ensuring Success with Data Integrity WebinarAI You Can Trust - Ensuring Success with Data Integrity Webinar
AI You Can Trust - Ensuring Success with Data Integrity WebinarPrecisely
 
Basic Building Blocks of Internet of Things.
Basic Building Blocks of Internet of Things.Basic Building Blocks of Internet of Things.
Basic Building Blocks of Internet of Things.YounusS2
 
activity_diagram_combine_v4_20190827.pdfactivity_diagram_combine_v4_20190827.pdf
activity_diagram_combine_v4_20190827.pdfactivity_diagram_combine_v4_20190827.pdfactivity_diagram_combine_v4_20190827.pdfactivity_diagram_combine_v4_20190827.pdf
activity_diagram_combine_v4_20190827.pdfactivity_diagram_combine_v4_20190827.pdfJamie (Taka) Wang
 
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...DianaGray10
 
UiPath Studio Web workshop series - Day 6
UiPath Studio Web workshop series - Day 6UiPath Studio Web workshop series - Day 6
UiPath Studio Web workshop series - Day 6DianaGray10
 
UiPath Community: AI for UiPath Automation Developers
UiPath Community: AI for UiPath Automation DevelopersUiPath Community: AI for UiPath Automation Developers
UiPath Community: AI for UiPath Automation DevelopersUiPathCommunity
 
Cybersecurity Workshop #1.pptx
Cybersecurity Workshop #1.pptxCybersecurity Workshop #1.pptx
Cybersecurity Workshop #1.pptxGDSC PJATK
 
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve DecarbonizationUsing IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve DecarbonizationIES VE
 
Anypoint Code Builder , Google Pub sub connector and MuleSoft RPA
Anypoint Code Builder , Google Pub sub connector and MuleSoft RPAAnypoint Code Builder , Google Pub sub connector and MuleSoft RPA
Anypoint Code Builder , Google Pub sub connector and MuleSoft RPAshyamraj55
 
Salesforce Miami User Group Event - 1st Quarter 2024
Salesforce Miami User Group Event - 1st Quarter 2024Salesforce Miami User Group Event - 1st Quarter 2024
Salesforce Miami User Group Event - 1st Quarter 2024SkyPlanner
 
OpenShift Commons Paris - Choose Your Own Observability Adventure
OpenShift Commons Paris - Choose Your Own Observability AdventureOpenShift Commons Paris - Choose Your Own Observability Adventure
OpenShift Commons Paris - Choose Your Own Observability AdventureEric D. Schabell
 

Recently uploaded (20)

ADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDE
ADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDEADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDE
ADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDE
 
Videogame localization & technology_ how to enhance the power of translation.pdf
Videogame localization & technology_ how to enhance the power of translation.pdfVideogame localization & technology_ how to enhance the power of translation.pdf
Videogame localization & technology_ how to enhance the power of translation.pdf
 
How Accurate are Carbon Emissions Projections?
How Accurate are Carbon Emissions Projections?How Accurate are Carbon Emissions Projections?
How Accurate are Carbon Emissions Projections?
 
Igniting Next Level Productivity with AI-Infused Data Integration Workflows

Agile Integration eBook from 2018

Agile integration architecture

Using lightweight integration runtimes to implement a container-based and microservices-aligned integration architecture
Contents:

Authors

Section 1: The Impact of Digital Transformation on Integration

Chapter 1: Integration has changed
- How to navigate the book
- The impact of digital transformation
- The value of application integration for digital transformation

Chapter 2: The journey so far: SOA, ESBs and APIs
- The forming of the ESB pattern
- What went wrong for the centralized ESB pattern?
- The API economy and bi-modal IT
- Microservices architecture: A more agile and scalable way to build applications
- The rise of lightweight runtimes
- A comparison of SOA and microservice architecture

Chapter 3: The case for agile integration architecture
- Microservice architecture
- Agile integration architecture
- Aspect 1: Fine-grained integration deployment
- Aspect 2: Decentralized integration ownership
- Aspect 3: Cloud-native integration infrastructure
- How has the modern integration runtime changed to accommodate agile integration architecture?
Contents (continued):

Section 2: Exploring agile integration architecture in detail

Chapter 4: Aspect 1: Fine-grained integration deployment
- What characteristics does the integration runtime need?
- Granularity
- Conclusion on fine-grained integration deployment
- Lessons Learned

Chapter 5: Aspect 2: Decentralized integration ownership
- Decentralizing integration ownership
- Breaking up the centralized ESB
- Does decentralized integration also mean decentralized infrastructure?
- Traditional centralized technology-based organization
- Moving to a decentralized, business-focused team structure
- Big bangs generally lead to big disasters
- Prioritizing Project Delivery First
- Enforcing governance in a decentralized structure
- Evolving the role of the Architect
- How can we have multi-skilled developers?
- Conclusions on decentralized integration ownership
- Lessons Learned

Chapter 6: Aspect 3: Cloud native integration infrastructure
- Cattle not pets
- Integration pets: The traditional approach
- Benefits for cloud
Contents (continued):

Chapter 6: Aspect 3: Cloud native integration infrastructure (continued)
- Integration cattle: An alternative lightweight approach
- What's so different with cattle?
- Pros and cons
- Application and integration handled by the same team
- Common infrastructure enabling multi-skilled development
- Portability: Public, private, multicloud
- Conclusion on cloud native integration infrastructure
- Lessons Learned

Section 3: Moving Forward with an Agile Integration Architecture

Chapter 7: What path should you take?
- Don't worry…we haven't returned to point-to-point
- Deployment options for fine-grained integration
- Agile integration architecture and IBM

Chapter 8: Agile integration architecture for the Integration Platform
- What is an integration platform?
- The IBM Cloud Integration Platform
- Emerging use cases and the integration platform

Appendix One: References
Authors

Kim Clark
Integration Architect
kim.clark@uk.ibm.com
Kim is a technical strategist on IBM's integration portfolio, working as an architect providing guidance to the offering management team on current trends and challenges. He has spent the last couple of decades working in the field implementing integration and process-related solutions.

Tony Curcio
Director, Application Integration
tcurcio@us.ibm.com
After years of implementing integration solutions in a variety of technologies, Tony joined the IBM offering management team in 2008. He now leads the Application Integration team in working with customers as they adopt more agile models for building integration solutions and embrace cloud as part of their IT landscape.

Nick Glowacki
Technical Specialist
nick.glowacki@ibm.com
Nick is a technical evangelist for IBM's integration portfolio, working as a technical specialist exploring current trends and building leading-edge solutions. He has spent the last 5 years working in the field and guiding a series of teams through their microservices journey. Before that he spent 5+ years in various other roles, such as developer, architect and IBM DataPower specialist. Over the course of his career he has worked with Node.js, XSL, JSON, Docker, Solr, IBM API Connect, Kubernetes, Java, SOAP, XML, WAS, FileNet, MQ, C++, Cast Iron, IBM App Connect and IBM Integration Bus.

THANK YOU
Sincere thanks go to the following people for their significant and detailed input and review of the material: Carsten Bornert, Andy Garratt, Alan Glickenhouse, Rob Nicholson, Brian Petrini, Claudio Tagliabue, and Ben Thompson.
Executive Summary

The organization pursuing digital transformation must embrace new ways to use and deploy integration technologies, so they can move quickly in a manner appropriate to the goals of multicloud, decentralization and microservices. The application integration layer must transform to allow organizations to move boldly in building new customer experiences, rather than forcing models for architecture and development that pull away from maximizing the organization's productivity.

Many organizations have started embracing agile application techniques such as microservice architecture and are now starting to see the benefits of that shift. This approach complements and accelerates an enterprise's API strategy. Businesses should also seek to use this approach to modernize their existing ESB infrastructure to achieve more effective ways to manage and operate their integration services in their private or public cloud.

This book explores the merits of what we refer to as agile integration architecture1 - a container-based, decentralized and microservices-aligned approach for integration solutions that meets the demands of agility, scalability and resilience required by digital transformation.

Agile integration architecture enables building, managing and operating effectively and efficiently to achieve the goals of digital transformation. It includes three distinct aspects that we will explore in detail:
a) Fine-grained integration deployment
b) Decentralized integration ownership
c) Cloud-native integration infrastructure

1 Note that we have used the term "lightweight integration" in the past, but have moved to the more appropriate "agile integration architecture".
How to navigate the book

The book is divided into three sections.

Section 1: The Impact of Digital Transformation on Integration
- Chapter 1: Integration has changed - Explores the effect that digital transformation has had on both the application and integration landscape, and the limitations of previous techniques.
- Chapter 2: The journey so far: SOA, ESBs and APIs - Explores what led us up to this point, the pros and cons of SOA and the ESB pattern, the influence of APIs and the introduction of microservices architecture.
- Chapter 3: The case for agile integration architecture - Explains how agile integration architecture exploits the principles of microservices architecture to address these new needs.

Section 2: Exploring agile integration architecture in detail
- Chapter 4: Aspect 1: Fine-grained integration deployment - Addresses the benefits an organization gains by breaking up the centralized ESB.
- Chapter 5: Aspect 2: Decentralized integration ownership - Discusses how shifting from a centralized governance and development practice creates new levels of agility and innovation.
- Chapter 6: Aspect 3: Cloud native integration infrastructure - Describes how adopting key technologies and practices from the cloud-native application discipline can provide similar benefits to application integration.

Section 3: Moving Forward with an Agile Integration Architecture
- Chapter 7: What path should you take? - Explores several ways agile integration architecture can be approached.
- Chapter 8: Agile integration architecture for the Integration Platform - Surveys the wider landscape of integration capabilities and relates agile integration architecture to other styles of integration as part of a holistic strategy.
Section 1: The Impact of Digital Transformation on Integration

The rise of the digital economy, like most of the seismic technology shifts over the past several centuries, has fundamentally changed not only technology but business as well. The very concept of the "digital economy" continues to evolve. Where once it was just a section of the economy that was built on digital technologies, it has evolved to become almost indistinguishable from the "traditional economy", growing to include almost any new technology, such as mobile, the Internet of Things, cloud computing, and augmented intelligence.

At the heart of the digital economy is the basic need to connect disparate data no matter where it lives. This has led to the rise of application integration: the need to connect multiple applications and data to deliver the greatest insight to the people and systems who can act on it. In this section we will explore how the digital economy created and then altered our concept of application integration.

- Chapter 1: Integration has changed - Explores the effect that digital transformation has had on both the application and integration landscape, and the limitations of previous techniques.
- Chapter 2: The journey so far: SOA, ESBs and APIs - Explores what led us up to this point, the pros and cons of SOA and the ESB pattern, the influence of APIs and the introduction of microservices architecture.
- Chapter 3: The case for agile integration architecture - Explains how agile integration architecture exploits the principles of microservices architecture to address these new needs.
Chapter 1: Integration has changed

The impact of digital transformation

Over the last two years we have seen a tremendous acceleration in the pace at which customers are establishing digital transformation initiatives. In fact, IDC estimates that digital transformation initiatives represent a $20 trillion market opportunity over the next 5 years.2 That is a staggering figure with respect to the impact across all industries and companies of all sizes.

A primary focus of this digital transformation is to build new customer experiences through connected experiences across a network of applications that leverage data of all types. To drive new customer experiences, organizations must tap into an ever-growing set of applications, processes and information sources - all of which significantly expand the enterprise's need for and investment in integration capabilities.

However, bringing together these processes and information sources at the right time and within the right context has become increasingly complicated. Consider that many organizations have aggressively adopted SaaS business applications, which have spread their key data sources across a much broader landscape. Additionally, new data sources that are available from external data providers must be injected into business processes to create competitive differentiation. Finally, AI capabilities - which are being attached to many customer-facing applications - require a broad range of information to train, improve and correctly respond to business events. These processes and information sources need to be integrated by making them accessible synchronously via APIs, propagated in near real time by event streams, and a multitude of other mechanisms, more so than ever before.

It is no wonder that this growing complexity has increased the enterprise's need for and investment in integration capabilities. The pace of these investments, in both digital transformation generally and integration specifically, has led to a series of changes in how organizations are building solutions. Progressive IT shops have sought out, and indeed found, more agile ways to develop than were typical even just a few years ago.

2 IDC MaturityScape Benchmark: Digital Transformation Worldwide, 2017, Shawn Fitzgerald.
The value of application integration for digital transformation

When we consider the agenda for building new customer experiences and focus on how data is accessed and made available for the services and APIs that power these initiatives, we can clearly recognize several significant benefits that application integration brings to the table.

1. Effectively address disparity: One of the key strengths of integration tooling is the ability to access data from any system, with any sort of data, in any sort of format, and build homogeneity. The application landscape is only growing more diverse as organizations adopt SaaS applications and build new solutions in the cloud, spreading their data further across a hybrid set of systems. Even in the world of APIs, there are variations in data formats and structures that must be addressed. Furthermore, every system has subtleties in the way it enables updates and surfaces events. The need for the organization to address information disparity is therefore growing at that same pace, and application integration must remain equipped to address the challenge of emerging formats.

2. Expertise of the endpoints: Each system has its own peculiarities that must be understood and responded to. Modern integration includes smarts around complex protocols and data formats, but it goes much further than that. It also incorporates intelligence about the actual objects, business and functions within the end systems. Application integration tooling is compassionate - understanding how to work with each system distinctly. This knowledge of the endpoint must include not only errors, but authentication protocols, load management, performance optimization, transactionality, idempotence, and much, much more. By including such features "in the box", application integration yields tremendous gains in productivity over coding, and arguably a more consistent level of enterprise-class resiliency.
3. Innovation through data: Applications in a digital world owe much of their innovation to the opportunity to combine data that is beyond their boundaries and create new meaning from it. This is particularly visible in microservices architecture, where the ability of application integration technologies to intelligently draw multiple sources of data together is often a core business requirement. Whether composing multiple API calls together or interpreting event streams, the main task of many microservices components is essentially integration.

4. Enterprise-grade artifacts: Integration flows developed through application integration tooling inherit a tremendous amount of value from the runtime. Users can focus on building the business logic without having to worry about the surrounding infrastructure. The application integration runtime includes enterprise-grade features for error recovery, fault tolerance, log capture, performance analysis, message tracing, and transactional update and recovery. Additionally, in some tools the artifacts are built using open standards and consistent best practices, without requiring the IT team to be experts in those domains.

Application integration benefits organizations building digital transformation solutions by effectively addressing information disparity, providing expert knowledge of application endpoints, easily orchestrating activities across applications, and lowering the cost of building expert-level artifacts.

Each of these factors (data disparity, expert endpoints, innovation through data, and enterprise-grade artifacts) is causing a massive shift in how an integration architecture needs to be conceived, implemented and managed. The result is that organizations, and architects in particular, are reconsidering what integration means in the new digital age. Enter agile integration architecture, a container-based, decentralized and microservices-aligned approach for integration solutions that meets the demands of agility, scalability and resilience required by digital transformation.

The integration landscape is changing apace with enterprise and marketplace computing demands, but how did we get from SOA and ESBs to modern, containerized, agile integration architecture?
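The "composing multiple API calls together" role described above can be sketched in a few lines. This is an illustrative example only, not taken from the book: the `fetch_*` functions, field names and values are invented stand-ins for real back-end API calls.

```python
# Hypothetical sketch: a small composition component that merges data from
# two back-end sources into one enriched view - the essential integration
# task of many microservices. The fetch_* functions are stand-ins for real
# API calls; names and fields are illustrative only.

def fetch_profile(customer_id):
    # In practice this would call a CRM API; stubbed here for illustration.
    return {"id": customer_id, "name": "Ada", "segment": "retail"}

def fetch_orders(customer_id):
    # In practice this would call an order-history API.
    return [{"order": "A-100", "total": 42.0},
            {"order": "A-101", "total": 18.5}]

def customer_view(customer_id):
    """Compose two upstream calls into a single enriched response."""
    profile = fetch_profile(customer_id)
    orders = fetch_orders(customer_id)
    return {
        **profile,
        "order_count": len(orders),
        "lifetime_value": sum(o["total"] for o in orders),
    }

print(customer_view("c1"))
```

The point of the sketch is that the component itself holds no data of its own; its entire value lies in drawing the two sources together.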
Chapter 2: The journey so far: SOA, ESBs and APIs

Before we dive into agile integration architecture, we first need to understand what came before in a little more detail. In this chapter we will briefly look at the challenges of SOA by taking a closer look at what the ESB pattern was, how it evolved, where APIs came onto the scene, and the relationship between all that and microservices architecture. Let's start with SOA and the ESB, and what went wrong.

The forming of the ESB pattern

As we started the millennium, we saw the beginnings of the first truly cross-platform protocol for interfaces. The internet, and with it HTTP, had become ubiquitous, XML was limping its way into existence off the back of HTML, and the SOAP protocols for providing synchronous web service interfaces were just taking shape. Relatively wide acceptance of these standards hinted at a brighter future where any system could discover and talk to any other system via a real-time synchronous remote procedure call, without the reams of integration code that had been required in the past.

From this series of events, service-oriented architecture (SOA) was born. The core purpose of SOA was to expose data and functions buried in systems of record over well-formed, simple-to-use, synchronous interfaces, such as web services. Clearly, SOA was about more than just providing those services, and often involved some significant re-engineering to align the back-end systems with the business needs, but the end goal was a suite of well-defined, common, re-usable services collating disparate systems. This would enable new applications to be implemented without the burden of deep integration every time: once the integration was done for the first time and exposed as a service, it could be re-used by the next application.

However, this simple integration was a one-sided equation. We might have been able to standardize these protocols and data formats, but the back-end systems of record were typically old and had antiquated protocols and data formats for their current interfaces. Something was needed to mediate between the old systems and the new cross-platform protocols. Figure 1 below shows where the breakdown typically occurred.

Figure 1. Synchronous centralized exposure pattern
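The mediation role described above - translating an antiquated back-end format into something a modern service can expose - can be illustrated with a small sketch. This example is not from the book; the fixed-width record layout and field positions are invented for illustration.

```python
# Illustrative sketch of the mediation an integration runtime performs:
# parsing a legacy fixed-width record into the structured payload a web
# service or API would expose. The record layout (8-char account,
# 20-char name, 8-digit balance in cents) is a hypothetical example.

import json

def parse_legacy_record(record):
    """Slice a fixed-width, mainframe-style record into named fields."""
    return {
        "account": record[0:8].strip(),
        "name": record[8:28].strip(),
        "balance": int(record[28:36]) / 100,  # stored as cents
    }

# Build a sample record matching the assumed layout.
record = "00012345" + "Jane Smith".ljust(20) + "00009950"
print(json.dumps(parse_legacy_record(record)))
```

Real integration runtimes package this kind of transformation, along with protocol handling and error management, so each new consumer does not have to re-implement it.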
This synchronous exposure pattern via web services was what the enterprise service bus (ESB) term was introduced for. It's all in the name—a centralized "bus" that could provide web "services" across the "enterprise". We already had the technology (the integration runtime) to provide connectivity to the back-end systems, coming from the preceding hub-and-spoke pattern. These integration runtimes could simply be taught to offer integrations synchronously via SOAP/HTTP, and we'd have our ESB.

What went wrong for the centralized ESB pattern?

While many large enterprises successfully implemented the ESB pattern, the term is often disparaged in the cloud-native space, and especially in relation to microservices architecture. It is seen as heavyweight and lacking in agility. What has happened to make the ESB pattern appear so outdated?

SOA turned out to be a little more complex than just the implementation of an ESB, for a host of reasons—not the least of which was the question of who would fund such an enterprise-wide program. Implementing the ESB pattern itself also turned out to be no small task.

The ESB pattern often took the "E" in ESB very literally and implemented a single infrastructure for the whole enterprise, or at least one for each significant part of the enterprise. Tens or even hundreds of integrations might have been installed on a production server cluster, and if that cluster was scaled up, they would be present on every clone within it. Although this heavy centralization isn't required by the ESB pattern itself, it was almost always present in the resultant topology. There were good reasons for this, at least initially: hardware and software costs were shared, provisioning of the servers only had to be performed once, and due to the relative complexity of the software, only one dedicated team of integration specialists needed to be skilled up to perform the development work.

The centralized ESB pattern had the potential to deliver significant savings in integration costs if interfaces could be re-used from one project to the next (the core benefit proposition of SOA). However, coordinating such a cross-enterprise initiative and ensuring that it would get continued funding—and that the funding only applied to services that would be sufficiently re-usable to cover their creation costs—proved to be very difficult indeed. Standards and tooling were maturing at the same time as the ESB patterns were being implemented, so the implementation cost and time for providing a single service were unrealistically high. Often, line-of-business teams that were expecting a greater pace of innovation in their new applications became increasingly frustrated with SOA, and by extension the ESB pattern.

ESB patterns have had issues ensuring continued funding for cross-enterprise initiatives, since those initiatives do not apply specifically within the context of a single business initiative.

Some of the challenges of a centralized ESB pattern were:

- Deploying changes could potentially destabilize other unrelated interfaces running on the centralized ESB.
- Servers containing many integrations had to be kept running and patched live wherever possible.
  • 14. 14Home 14 • Topologies for high availability and disaster recovery were complex and expensive. • For stability, servers typically ran many versions behind the current release of software reducing productivity. • The integration specialist teams often didn’t know much about the applications they were trying to integrate with. • Pooling of specialist integration skilled people resulted in more waterfall style engagement with application teams. • Service discovery was immature so documentation became quickly outdated. The result was that creation of services by this specialist SOA team became a bottleneck for projects rather than the enabler that it was intended to be. This typically gave by association the centralized ESB pattern a bad name. Formally, as we’ve described, ESB is an architectural pattern that refers to the exposure of services. However, as mentioned above, the term is often over-simplified and applied to the integration engine that’s used to implement the pattern. This erroneously ties the static and aging centralized ESB pattern with integration engines that have changed radically over the intervening time. Integration engines of today are significantly more lightweight, easier to install and use, and can be deployed in more decentralized ways that would have been unimaginable at the time the ESB concept was born. As we will see, agile integration architecture enables us to overcome the limitations of the ESB pattern. If you would like a deeper introduction into where the ESB pattern came from and a detailed look at the benefits, and the challenges that came with it, take a look at the source material for this section in the following article: http://ibm.biz/FateOfTheESBPaper External APIs have become an essential part of the online persona of many companies, and are at least as important as its websites and mobile applications. Let’s take a brief look at how that evolved from the maturing of internal SOA based services. 
SOAP-style RPC interfaces proved complex to understand and use, and simpler and more consistent RESTful services provided using JSON/HTTP became a popular mechanism. But the end goal was the same: to make functions and data available via standardized interfaces so that new applications could be built on top of them more quickly. With the broadening usage of these service interfaces, both within and beyond the enterprise, more formal mechanisms for providing services were required. It quickly became clear that simply making something available over a web service interface, or latterly as a RESTful JSON/HTTP API, was only part of the story. That service needed to be easily discovered by potential consumers, who needed a path of least resistance for gaining access to it and learning how to use it. Additionally, the providers of the service or API needed to be able to place controls on its usage, such as traffic control and an appropriate security model. Figure 2 below demonstrates how the introduction of service/API gateways effects the scope of the ESB pattern. The API economy and bi-modal IT
  • 15. 15 Figure 2. Introduction of service/API gateways internally and externally The typical approach was to separate the role of service/API exposure out into a separate gateway. These capabilities evolved into what is now known as API management and enabled simple administration of the service/API. The gateways could also be specialized to focus on API management-specific capabilities, such as traffic management (rate/throughput limiting), encryption/decryption, redaction, and security patterns. The gateways could also be supplemented with portals that describe the available APIs which enable self-subscription to use the APIs along with provisioning analytics for both users and providers of the APIs. While logically, the provisioning of APIs outside the enterprise looks like just an extension of the ESB pattern, there are both significant infrastructural and design differences between externally facing APIs and internal services/APIs. • From an infrastructural point of view, it is immediately obvious that the APIs are being used by consumers and devices that may exist anywhere from a geographical and network point of view. As a result, it is necessary to design the APIs differently to take into account the bandwidth available and the capabilities of the devices used as consumers. • From a design perspective, we should not underestimate the difference in the business objectives of these APIs. External APIs are much less focused on re-use, in the way that internal APIs/ services were in SOA, and more focused on creating services targeting specific niches of potential for new business. 
Suitably crafted channel-specific APIs provide an enterprise with the opportunity to radically broaden the number of innovation partners that it can work with (enabling crowd-sourcing of new ideas),
and they play a significant role in the disruption of industries that is so common today. This realization caused the birth of what we now call the API Economy, and it is a well-covered topic on IBM's "API Economy" blog.

The main takeaway here is that this progression exacerbated an already growing divide between the older, traditional systems of record that still perform all the most critical transactions fundamental to the business, and what became known as the systems of engagement, where innovation occurs at a rapid pace, exploring new ways of interacting with external consumers. This resulted in bi-modal IT, where new decentralized, fast-moving areas of IT needed much greater agility in their development, and it led to the invention of new ways of building applications using, for example, microservices architecture.

The rise of lightweight runtimes

Earlier, we covered the challenges of the heavily centralized integration runtime: hard to change safely and quickly without affecting other integrations, expensive and complex to scale, and so on. Sound familiar? It should. These were exactly the same challenges that application development teams were facing at the same time: bloated, complex application servers that contained too much interconnected and cross-dependent code, on a fragile, cumbersome topology that was hard to replicate or scale. Ultimately, it was this common paradigm that led to the emergence of the principles of microservices architecture. As lightweight runtimes and application servers such as Node.js and IBM WAS Liberty were introduced (runtimes that started in seconds and had tiny footprints), it became easier to run them on smaller virtual machines, and then eventually within container technologies such as Docker.

Microservices architecture: A more agile and scalable way to build applications

In order to meet the constant need for IT to improve agility and scalability, a next logical step in application development was to break applications up into smaller pieces and run them completely independently of one another. Eventually, these pieces became small enough that they deserved a name, and they were termed microservices.

If you take a closer look at microservices concepts, you will see that the approach has a much broader intent than simply breaking things up into smaller pieces. There are implications for architecture, process, organization, and more, all focused on enabling organizations to better use cloud-native technology advances to increase their pace of innovation. However, focusing on the core technological difference, these small, independent microservices components can be changed in isolation to create greater agility, scaled individually to make better use of cloud-native infrastructure, and managed more ruthlessly to provide the resilience required by 24/7 online applications. Figure 3 below visualizes the microservices architecture we've just described.
In theory, these principles could be used anywhere. Where we see them most commonly is in the systems of engagement layer, where greater agility is essential. However, they could also be used to improve the agility, scalability, and resilience of a system of record, or indeed anywhere else in the architecture, as you will see as we discuss agile integration architecture in more depth.

Without question, microservices principles can offer significant benefits under the right circumstances. However, choosing the right time to use these techniques is critical, and getting the design of highly distributed components correct is not a trivial endeavor. Not least is the challenge of deciding the shape and size of your microservices components, along with equally critical design choices around the extent to which you decouple them. You need to constantly balance practical reality with aspirations for microservices-related benefits. In short, your microservices-based application is only as agile and scalable as your design is good and your methodology is mature.

Figure 3. Microservices architecture: A new way to build applications
A comparison of SOA and microservice architecture

Microservices architecture inevitably gets compared to SOA in architectural discussions, not least because they share many words in common. However, as you will see, this comparison is misleading at best, since the terms apply to two very different scopes. Figure 4 demonstrates that SOA is enterprise-scoped, while microservices architecture is application-scoped.

Service-oriented architecture is an enterprise-wide initiative to create re-usable, synchronously available services and APIs, such that new applications can be created more quickly, incorporating data from other systems. Microservices architecture, on the other hand, is an option for how you might choose to write an individual application in a way that makes that application more agile, scalable, and resilient. It's critical to recognize this difference in scope, since some of the core principles of each approach could be completely incompatible if applied at the same scope. For example:

Figure 4. SOA is enterprise scoped, microservices architecture is application scoped
• Re-use: In SOA, re-use of integrations is the primary goal, and at an enterprise level, striving for some level of re-use is essential. In microservices architecture, creating a microservices component that is re-used at runtime throughout an application results in dependencies that reduce agility and resilience. Microservices components generally prefer to re-use code by copy, and they accept data duplication to help improve decoupling from one another.

• Synchronous calls: The re-usable services in SOA are made available across the enterprise using predominantly synchronous protocols such as RESTful APIs. However, within a microservices application, synchronous calls introduce real-time dependencies, resulting in a loss of resilience, and also latency, which impacts performance. Within a microservices application, interaction patterns based on asynchronous communication are preferred, such as event sourcing, where a publish/subscribe model is used to enable a microservices component to remain up to date on changes happening to the data in another component.

• Data duplication: A clear aim of providing services in an SOA is for all applications to synchronously get hold of, and make changes to, data directly at its primary source, which reduces the need to maintain complex data synchronization patterns. In microservices applications, each microservice ideally has local access to all the data it needs to ensure its independence from other microservices, and indeed from other applications, even if this means some duplication of data in other systems. Of course, this duplication adds complexity, so it needs to be balanced against the gains in agility and performance, but it is accepted as a reality of microservices design.

So, in summary, SOA has an enterprise scope and looks at how integration occurs between applications. Microservices architecture has an application scope, dealing with how the internals of an application are built. This is a relatively swift explanation of a much more complex debate, which is thoroughly explored in a separate article: http://ibm.biz/MicroservicesVsSoa

However, we have enough of the key concepts to now delve into the various aspects of agile integration architecture.
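The publish/subscribe propagation described above can be sketched in a few lines. This is an illustrative, in-process Python sketch (the service names, topic, and fields are invented); a production system would use a broker such as Kafka, but the decoupling idea is the same:

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process publish/subscribe bus, standing in for a
    real message broker."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

class OrdersService:
    """Owns order data; announces changes rather than being queried."""

    def __init__(self, bus):
        self.bus = bus
        self.orders = {}

    def place_order(self, order_id, address):
        self.orders[order_id] = address
        self.bus.publish("order.placed", {"id": order_id, "address": address})

class ShippingService:
    """Keeps a local, duplicated copy of the data it needs, so it never
    calls OrdersService synchronously and stays available if it is down."""

    def __init__(self, bus):
        self.addresses = {}
        bus.subscribe("order.placed", self.on_order_placed)

    def on_order_placed(self, event):
        self.addresses[event["id"]] = event["address"]

if __name__ == "__main__":
    bus = EventBus()
    orders = OrdersService(bus)
    shipping = ShippingService(bus)
    orders.place_order("o-1", "21 Baker St")
    print(shipping.addresses["o-1"])  # served from the local copy
```

The duplicated `addresses` map is exactly the accepted data duplication described above: Shipping trades a second copy of the address for independence from Orders at runtime.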
Chapter 3: The case for agile integration architecture

Let's briefly explore why microservices concepts have become so popular in the application space. We can then quickly see how those principles can be applied to the modernization of integration architecture.

Microservices architecture

Microservices architecture is an alternative approach to structuring applications. Rather than an application being a large silo of code all running on the same server, an application is designed as a collection of smaller, completely independently running components. This enables the following benefits, which are also illustrated in Figure 5 below:

• Greater agility: the components are small enough to be understood completely in isolation and changed independently.
• Elastic scalability: their resource usage can be truly tied to the business model.
• Discrete resilience: with suitable decoupling, changes to one microservice do not affect others at runtime.

Figure 5. Comparison of siloed and microservices-based applications
Microservices components are often made from pure language runtimes such as Node.js or Java, but equally they can be made from any suitably lightweight runtime. The key requirements are that they have a simple, dependency-free installation, a file-system-based deploy, start/stop in seconds, and strong support for container-based infrastructure.

Microservices architectures lead to the primary benefits of greater agility, elastic scalability, and discrete resilience. They enable developers to make better use of cloud-native infrastructure and manage components more ruthlessly, providing the resilience and scalability required by 24/7 online applications. They also improve ownership in line with DevOps practices, whereby a team can truly take responsibility for a whole microservices component throughout its lifecycle and hence make changes at a higher velocity.

As with any new approach, there are challenges too, some obvious and some more subtle. Microservices are a radically different approach to building applications. Let's have a brief look at some of the considerations:

• Greater overall complexity: Although the individual components are potentially simpler, and as such are easier to change and scale, the overall application is inevitably a collection of highly distributed individual parts.

• Learning curve on cloud-native infrastructure: To manage the increased number of components, new technologies and frameworks are required, including service discovery, workload orchestration, container management, logging frameworks, and more. Platforms are available to make this easier, but it is still a learning curve.

• Different design paradigms: The microservices application architecture requires fundamentally different approaches to design, for example using eventual consistency rather than transactional interactions, or the subtleties of asynchronous communication to truly decouple components.

• DevOps maturity: Microservices require a mature delivery capability. Continuous integration, deployment, and fully automated tests are a must. The developers who write code must be responsible for it in production. Build and deployment chains need significant changes to provide the right separation of concerns for a microservices environment.

Microservices architecture is not the solution to every problem. Since there is an overhead of complexity with the microservices approach, it is critical to ensure the benefits outlined above outweigh the extra complexity. However, if applied judiciously, it can provide order-of-magnitude benefits that would be hard to achieve any other way.

Microservices architecture discussions are often heavily focused on alternate ways to build applications, but the core ideas behind it are relevant to all software components, including integration.
Agile integration architecture

If what we've learned from microservices architecture means it sometimes makes sense to build applications in a more granular, lightweight fashion, why shouldn't we apply that to integration too? Integration is typically deployed in a very siloed and centralized fashion, such as the ESB pattern. What would it look like if we were to revisit that in the light of microservices architecture? It is this alternative approach that we call "agile integration architecture".

Agile integration architecture is defined as "a container-based, decentralized and microservices-aligned architecture for integration solutions".

There are three related, but separate, aspects to agile integration architecture:

• Aspect 1: Fine-grained integration deployment. What might we gain by breaking out the integrations in the siloed ESB into separate runtimes?
• Aspect 2: Decentralized integration ownership. How should we adjust the organizational structure to better leverage a more fine-grained approach?
• Aspect 3: Cloud-native integration infrastructure. What further benefits could we gain from a fully cloud-native approach to integration?

Although these each have dedicated chapters, it's worth taking the time to summarize them at a conceptual level here.

Aspect 1: Fine-grained integration deployment

The centralized deployment of integration hub or enterprise service bus (ESB) patterns, where all integrations are deployed to a single, heavily nurtured (HA) pair of integration servers, has been shown to introduce a bottleneck for projects. Any deployment to the shared servers runs the risk of destabilizing existing critical interfaces, and no individual project can choose to upgrade the version of the integration middleware to gain access to new features. We could instead break up the enterprise-wide ESB component into smaller, more manageable, dedicated pieces; perhaps in some cases we can even get down to one runtime for each interface we expose.
These "fine-grained integration deployment" patterns provide specialized, right-sized containers, offering improved agility, scalability, and resilience, and they look very different from the centralized ESB patterns of the past. Figure 6 demonstrates in simple terms how a centralized ESB differs from fine-grained integration deployment.

Figure 6: Simplistic comparison of a centralized ESB to fine-grained integration deployment

Fine-grained integration deployment draws on the benefits of a microservices architecture we listed in the last section:

• Agility: Different teams can work on integrations independently without deferring to a centralized group or infrastructure that can quickly become a bottleneck. Individual integration flows can be changed, rebuilt, and deployed independently of other flows, enabling safer application of changes and maximizing speed to production.

• Scalability: Individual flows can be scaled on their own, allowing you to take advantage of efficient elastic scaling of cloud infrastructures.

• Resilience: Isolated integration flows that are deployed in separate containers cannot affect one another by stealing shared resources, such as memory, connections, or CPU.
Breaking the single ESB runtime up into many separate runtimes, each containing just a few integrations, is explored in detail in "Chapter 4: Aspect 1: Fine-grained integration deployment".

Aspect 2: Decentralized integration ownership

A significant challenge faced by service-oriented architecture was the way that it tended to force the creation of central integration teams, and infrastructure, to create the service layer. This created ongoing friction in the pace at which projects could run, since they always had the central integration team as a dependency. The central team knew its integration technology well, but often didn't understand the applications it was integrating, so translating requirements could be slow and error prone. Many organizations would have preferred that the application teams own the creation of their own services, but the technology and infrastructure of the time didn't enable that.

The move to fine-grained integration deployment opens a door such that ownership of the creation and maintenance of integrations can be distributed. It's not unreasonable for business application teams to take on integration work, streamlining the implementation of new capabilities. This shift is discussed in more depth in "Chapter 5: Aspect 2: Decentralized integration ownership".
Aspect 3: Cloud-native integration infrastructure

Clearly, agile integration architecture requires that the integration topology be deployed very differently. A key aspect of that is a modern integration runtime that can be run in a container-based environment and is well suited to cloud-native deployment techniques.

How has the modern integration runtime changed to accommodate agile integration architecture?

Integration runtimes have changed dramatically in recent years, so much so that these lightweight runtimes can be used in truly cloud-native ways. By this we are referring to their ability to hand off the burden of many of their previously proprietary mechanisms for cluster management, scaling, and availability to the cloud platform in which they are running. This entails a lot more than just running them in a containerized environment. It means they have to be able to function as "cattle not pets," making best use of orchestration capabilities such as Kubernetes and many other common cloud-standard frameworks. We expand considerably on these concepts in "Chapter 6: Aspect 3: Cloud-native integration infrastructure".

Modern integration runtimes are almost unrecognizable from their historical peers. Let's have a look at some of those differences:

• Fast, lightweight runtime: They run in containers such as Docker and are sufficiently lightweight that they can be started and stopped in seconds and easily administered by orchestration frameworks such as Kubernetes.

• Dependency free: They no longer require databases or message queues, although obviously they are very adept at connecting to them if they need to.

• File-system-based installation: They can be installed simply by laying their binaries out on a file system and starting them up, which is ideal for the layered file systems of Docker images.

• DevOps tooling support: The runtime should be continuous integration and deployment-ready. Script- and property-file-based install, build, deploy, and configuration enable "infrastructure as code" practices. Template scripts for standard build and deploy tools should be provided to accelerate inclusion into DevOps pipelines.

• API-first: The primary communication protocol should be RESTful APIs. Exposing integrations as RESTful APIs should be trivial and based upon common conventions such as the OpenAPI specification. Calling downstream RESTful APIs should be equally trivial, including discovery via definition files.

• Digital connectivity: In addition to the rich enterprise connectivity that has always been provided by integration runtimes, they must also connect to modern resources, for example NoSQL databases (MongoDB, Cloudant, etc.) and messaging services such as Kafka. Furthermore, they need access to a rich catalogue of application-intelligent connectors for SaaS (software as a service) applications such as Salesforce.

• Continuous delivery: Continuous delivery is enabled by command-line interfaces and template scripts that mesh into standard DevOps pipeline tools. This further reduces the knowledge required to implement interfaces and increases the pace of delivery.

• Enhanced tooling: Enhanced tooling for integration means most interfaces can be built by configuration alone, often by individuals with no integration background. With the addition of templates for common integration patterns, integration best practices are burned into the tooling, further simplifying the tasks. Deep integration specialists are less often required, and some integration can potentially be taken on by application teams, as we will see in the next section on decentralized integration.

Modern integration runtimes are well suited to all three aspects of agile integration architecture: fine-grained deployment, decentralized ownership, and true cloud-native infrastructure. Before we turn our attention to these aspects in more detail, we will take a more detailed look at the SOA pattern for those who may be less familiar with it, and explore where organizations have struggled to reach the potential they sought.
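As an illustration of the API-first characteristic described above, here is a minimal Python sketch of a lightweight runtime exposing one integration flow as a RESTful endpoint alongside an OpenAPI description of it. The flow name, paths, and payloads are invented for the example; a real integration runtime would generate all of this from configuration:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical OpenAPI document describing a single integration flow.
OPENAPI = {
    "openapi": "3.0.0",
    "info": {"title": "customer-lookup", "version": "1.0.0"},
    "paths": {"/customers/{id}": {"get": {"summary": "Look up a customer"}}},
}

class FlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/openapi.json":
            # Definition file for discovery by consumers.
            body = json.dumps(OPENAPI).encode()
        elif self.path.startswith("/customers/"):
            # Stand-in for the real integration flow (e.g. a back-end call).
            cust_id = self.path.rsplit("/", 1)[-1]
            body = json.dumps({"id": cust_id, "name": "Jane"}).encode()
        else:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

def start_runtime(port=0):
    """Start the 'runtime' on an ephemeral port in a background thread."""
    server = HTTPServer(("127.0.0.1", port), FlowHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    srv = start_runtime()
    url = f"http://127.0.0.1:{srv.server_port}/customers/42"
    with urllib.request.urlopen(url) as r:
        print(json.loads(r.read()))
    srv.shutdown()
```

The point of the sketch is the shape, not the implementation: the integration is reachable over plain HTTP/JSON, and its contract is discoverable from a definition file rather than from proprietary tooling.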
Section 2: Exploring agile integration architecture in detail

Now that you have been introduced to the concept of agile integration architecture, we are going to dive into greater detail on its three main aspects, looking at their characteristics and presenting a real-life scenario.

- Chapter 4: Aspect 1: Fine-grained integration deployment. Addresses the benefits an organization gains by breaking up the centralized ESB.
- Chapter 5: Aspect 2: Decentralized integration ownership. Discusses how shifting from a centralized governance and development practice creates new levels of agility and innovation.
- Chapter 6: Aspect 3: Cloud-native integration infrastructure. Describes how adopting key technologies and practices from the cloud-native application discipline can provide similar benefits to application integration.

Chapter 4: Aspect 1: Fine-grained integration deployment

Breaking up the centralized ESB

If the large centralized ESB pattern containing all the integrations for the enterprise is reducing agility for all the reasons noted previously, then why not break it up into smaller pieces? This section explores why and how we might go about doing that.

If it makes sense to build applications in a more granular fashion, why shouldn't we apply this idea to integration, too? We could break up the enterprise-wide centralized ESB component into smaller, more manageable, dedicated components; perhaps even down to one integration runtime for each interface we expose, although in many cases it would be sufficient to bunch the integrations as a handful per component.
The heavily centralized ESB pattern can be broken up in this way, and so can the older hub-and-spoke pattern. This makes each individual integration easier to change independently, and it improves agility, scaling, and resilience. Figure 7 shows the result of breaking up the ESB into separate, independently maintainable and scalable components.

Figure 7: Breaking up the centralized ESB into independently maintainable and scalable pieces
We typically call this pattern fine-grained integration deployment (a key aspect of agile integration architecture) to differentiate it from more purist microservices application architectures. We also want to mark a distinction from the ESB term, which is strongly associated with the more cumbersome centralized integration architecture.

This approach allows you to make a change to an individual integration with complete confidence that you will not introduce any instability into the environment on which the other integrations are running. You could choose to use a different version of the integration runtime, perhaps to take advantage of new features, without forcing a risky upgrade to all other integrations. You could scale up one integration completely independently of the others, making extremely efficient use of infrastructure, especially when using cloud-based models.

There are, of course, considerations to be worked through with this approach, such as the increased complexity of more moving parts. Also, although the above could be achieved using virtual machine technology, it is likely that the long-term benefits would be greater if you were to use containers such as Docker and orchestration mechanisms such as Kubernetes. Introducing new technologies to the integration team can add a learning curve. However, these are the same challenges that an enterprise would already be facing if it were exploring microservices architecture in other areas, so that expertise may already exist within the organization.

What characteristics does the integration runtime need?

To be used for fine-grained deployment, what characteristics does a modern integration runtime need?

• Fast, light integration runtime: The actual runtime is slim, dispensing with hard dependencies on other components such as databases for configuration, and no longer fundamentally reliant on a specific message-queuing capability. The runtime itself can now be stopped and started in seconds, yet none of its rich functionality has been sacrificed. It is totally reasonable to consider deploying a small number of integrations on a runtime like this and then running them independently, rather than placing all integrations on a single centralized topology. Installation is equally minimalist and straightforward, requiring little more than laying binaries out on a file system.

• Virtualization and containerization: The runtime should actively support containerization technologies such as Docker and container orchestration capabilities such as Kubernetes, enabling non-functional characteristics such as high availability and elastic scalability to be managed in the standardized ways used by other digital-generation runtimes, rather than relying on proprietary topologies and technology. This enables new runtimes to be introduced, administered, and scaled in well-known ways without requiring proprietary expertise.
• Stateless: The runtime needs to be able to run statelessly. In other words, runtimes should not be dependent on, or even aware of, one another. As such, they can be added to and taken away from a cluster freely, and new versions of interfaces can be deployed easily. This enables the container orchestration to manage scaling, rolling deployments, A/B testing, canary tests, and more, with no proprietary knowledge of the underlying integration runtime. This stateless aspect is essential if there are going to be more runtimes to manage in total.

• Cloud-first: It should be possible to immediately explore a deployment without the need to install any local infrastructure. Examples include providing a cloud-based managed service whereby integrations can be immediately deployed, with a low entry cost and an elastic cost model. Quick starts should be available for simple creation of deployment environments on major cloud vendors' infrastructures.

This provides a taste of how different the integration runtimes of today are from those of the past. IBM App Connect Enterprise (formerly known as IBM Integration Bus) is a good example of such a runtime. Integration runtimes are not in themselves an ESB; ESB is just one of the patterns they can be used for. They are used in a variety of other architectural patterns too, and increasingly in fine-grained integration deployment.

Granularity

A glaring question then remains: how granular should the decomposition of the integration flows be? Although you could potentially separate each integration into a separate container, it is unlikely that such a purist approach would make sense. The real goal is simply to ensure that unrelated integrations are not housed together. That is, a middle ground with containers that group related integrations together (as shown in Figure 8) can be sufficient to gain many of the benefits that were described previously.

Figure 8: Related integrations grouped together can lead to many benefits.
You target the integrations that need the most independence and break them out on their own. On the flip side, you keep together flows that, for example, share a common data model for cross-compatibility. In a situation where changes to one integration must result in changes to all related integrations, the benefits of separation may not be so relevant. For example, where any change to a shared data model must be performed on all related integrations, and they would all need to be regression tested anyway, having them as separate entities may be of only minimal value. However, if one of those related integrations has a very different scaling profile, there might be a case for breaking it out on its own. It's clear that there will always be a mixture of concerns to consider when assessing granularity. The right level of granularity is to decompose the integration flows to the point where unrelated integrations are not housed together.

Conclusion on fine-grained integration deployment

Fine-grained deployment allows you to reap some of the benefits of microservices architecture in your integration layer: greater agility from infrastructurally decoupled components, elastic scaling of individual integrations, and an inherent improvement in resilience from the greater isolation.
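The grouping heuristics above (keep flows that share a data model together, and split out any flow with a very different scaling profile) can be expressed as a simple partitioning step. The flow descriptions below are entirely hypothetical, and real decisions would weigh more factors than two flags:

```python
from collections import defaultdict

# Hypothetical integration flows, tagged with the data model they share
# and whether they have an unusual (high-volume) scaling profile.
FLOWS = [
    {"name": "create-customer", "model": "customer", "high_volume": False},
    {"name": "update-customer", "model": "customer", "high_volume": False},
    {"name": "customer-search", "model": "customer", "high_volume": True},
    {"name": "submit-claim",    "model": "claim",    "high_volume": False},
]

def deployment_units(flows):
    """Group flows that share a data model into one deployment unit,
    but give any high-volume flow its own unit so it can scale alone."""
    units = defaultdict(list)
    for flow in flows:
        if flow["high_volume"]:
            units[flow["name"]].append(flow["name"])  # its own unit
        else:
            units[flow["model"]].append(flow["name"])
    return dict(units)

if __name__ == "__main__":
    print(deployment_units(FLOWS))
```

With the sample flows this yields three units: the two customer CRUD flows together (they share a data model and change together), the high-volume search flow on its own, and the claim flow on its own.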
A real-life scenario

Let's examine an organization that had adopted an agile methodology and chosen a cloud, but still had a centralized team maintaining an enterprise-wide data model and ESB. This team realized that they struggled with even a simple change, such as adding a new element to the enterprise message model and the associated exposed endpoint. The team that owned the model took requests from application development teams. Since it wasn't reasonable for the modelling CoE (Center of Excellence) team to take requests constantly, they met once a week to discuss changes and decide whether to accept them. To reduce change frequency, the model was released once a week with whatever updates had been accepted by the CoE. After the model was changed, the ESB team would take action on any related changes. Because of the enterprise nature of the ESB, this would then again have to be coordinated with other builds, other application needs, and releases.

The problem

While this seemed like a reasonable approach, it created issues for the application development teams. Adding one element to the model took, at best, two weeks. The application team had to submit the request, then attend the CoE meeting; if the change was agreed to, the model would be released the following week. From there, the application development team would get the model containing their change (along with any other changes other teams had submitted between their last version and the current one), and only then could they start implementing business code. After some time, these two-week procedural delays began to add up. At this point we need to strongly consider whether the value of the highly governed enterprise message model is worth that investment, and whether the consistency gained through the CoE team is worth the delays. On the benefit side, the CoE team can create and maintain standards and keep a level of consistency; on the other side, that consistency incurs a penalty when viewed through the lens of time to market.

The solution

The solution was to break the data model into bounded contexts based on business focus areas. Furthermore, the integrations were divided into groups based on those bounded contexts too, each running on separate infrastructure. This allowed each data model and its associated integrations to evolve independently as required, while still providing consistency across a now narrower bounded context.

Lessons Learned

It is worth noting that although this provided improved autonomy with regard to data model changes, the integration team was still separate from the application teams, creating scheduling and requirements handover latencies. In the next section, we will discuss the importance of exploring changes to the organizational boundaries too.
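The bounded-context split described in the solution above can be sketched in code. This is a hypothetical illustration (the Sales and Support contexts and their fields are invented for this example): each context keeps its own model of a customer, with explicit translation at the boundary, so a change to one model no longer forces changes to, or regression testing of, integrations in the other context.

```python
from dataclasses import dataclass

# Hypothetical bounded contexts: each owns its own view of "Customer".
# A field added for Sales no longer forces a change on Support,
# because the models are separate, independently evolving artifacts.

@dataclass
class SalesCustomer:
    customer_id: str
    name: str
    credit_limit: float   # only the Sales context cares about credit

@dataclass
class SupportCustomer:
    customer_id: str
    name: str
    open_tickets: int     # only the Support context cares about tickets

def to_support_view(sales: SalesCustomer, open_tickets: int) -> SupportCustomer:
    """Explicit translation at the context boundary, instead of one
    enterprise-wide message model shared by every integration."""
    return SupportCustomer(customer_id=sales.customer_id,
                           name=sales.name,
                           open_tickets=open_tickets)

s = SalesCustomer("C001", "Acme Ltd", 50_000.0)
print(to_support_view(s, open_tickets=2))
```

The translation function is the only place the two contexts touch, which narrows the scope of any cross-context regression testing.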
Chapter 5: Aspect 2: Decentralized integration ownership

Decentralizing integration ownership

We can take what we've done in "Aspect 1: Fine-grained integration deployment" a step further. If you have broken up the integrations into separate decoupled pieces, you may opt to distribute those pieces differently from an ownership and administration point of view as well. The microservices approach encourages teams to gain increasing autonomy so that they can make changes confidently at a more rapid pace. When applied to integration, that means allowing the creation and maintenance of integration artifacts to be owned directly by application teams rather than by a single, separate, centralized team. This distribution of ownership is often referred to under the broader topic of "decentralization", a common theme in microservices architecture.

It is extremely important to recognize that decentralization is a significant change for most organizations. For some, it may be too different to take on board, and they may have valid reasons to remain completely centrally organized. For large organizations, it is unlikely to happen consistently across all domains. It is much more likely that only specific pockets of the organization will move to this approach, where it suits them culturally and helps them meet their business objectives. We'll discuss what effect that shift would have on an organization, and some of the pros and cons of decentralization.

In the strongly layered architecture described in "Chapter 2: The journey so far: SOA, ESBs and APIs", technology islands such as integration had their own dedicated, and often centralized, teams. Often referred to as the "ESB team" or the "SOA team", they owned the integration infrastructure and the creation and maintenance of everything on it. We could debate, per Conway's Law, whether the architecture created the separate team or the other way around, but the more important point is that the technology restriction of needing a single integration infrastructure has been lifted. We can now break integrations out into separate decoupled (containerized) pieces, each carrying all the dependencies they need, as demonstrated in Figure 9.
Figure 9: Decentralizing integration to the application teams

Technologically, there may be little difference between this diagram and the fine-grained integration diagram in the previous chapter. All the same integrations are present; they're just in a different place on the diagram. What has changed is who owns the integration components. Could the application teams take on integration themselves? Could they own the creation and maintenance of the integrations that belong to their applications? This is feasible because not only have most integration runtimes become more lightweight, but they have also become significantly easier to use. You no longer need to be a deep integration specialist to use a good modern integration runtime; it is perfectly reasonable for an application developer to make good use of one. You'll notice we've also shown the decentralization of the gateways, to denote that the administration of the APIs' exposure moves to the application teams as well. There are many potential advantages to this decentralized integration approach:

• Expertise: A common challenge for separate SOA teams was that they didn't understand the applications they were exposing through services. The application teams know the data structures of their own applications better than anyone.

• Optimization: Fewer teams are involved in the end-to-end implementation of a solution, significantly reducing the cross-team chatter, the project delivery timeframe, and the waterfall development that typically occurs in these cases.

• Empowerment: Governance teams were viewed as bottlenecks or checkpoints that had to be passed, adding artificial delays to document, review, and then approve solutions.
The goal was consistency; the downside was that achieving that consistency took time. The fundamental question is: does the consistency justify the additional time? With decentralization, each team is empowered to implement the governance policies that are appropriate to its own scope.

Let's reinforce the point made in the introduction of this chapter. While decentralization of integration offers unique potential benefits, especially in terms of overall agility, it is a significant departure from the way many organizations are structured today. The pros and cons need to be weighed carefully, and it may be that a blended approach, where only some parts of the organization take on this approach, is more achievable. To reiterate, decentralized integration is primarily an organizational change, not a technical one.

Does decentralized integration also mean decentralized infrastructure?

Does decentralized integration imply an infrastructure change? Possibly, but not necessarily. The move toward decentralized ownership of integrations and their exposure does not necessarily imply a decentralized infrastructure. While each application team clearly could have its own gateways and container orchestration platform, this is not a given. The important thing is that they can work autonomously. API management is very commonly implemented in this way: with shared infrastructure (an HA pair of gateways and a single installation of the API management components), but with each application team directly administering their own APIs as if they had their own individual infrastructure. The same can be done with the integration runtimes, by having a centralized container orchestration platform on which they are deployed, while giving application teams the ability to deploy their own containers independently of other teams.

Decentralized integration increases project expertise, focus, and team empowerment.
Benefits for cloud

It is worth noting that this decentralized approach is particularly powerful when moving to the cloud. Integration is already implemented in a cloud-friendly way and aligned with systems of record. Integrations relating to an application have been separated out from other, unrelated integrations, so they can move cleanly with the application. Furthermore, container-based infrastructures, if designed using cloud-ready principles and an infrastructure-as-code approach, are much more portable to cloud and make better use of cloud-based scaling and cost models. With the integration also owned by the application team, it can be effectively packaged as part of the application itself. In short, decentralized integration significantly improves your cloud readiness. We are now a very long way from the centralized ESB pattern (indeed, the term makes no sense in relation to this fully decentralized pattern), but we are still achieving the same intent of making application data and functions available for re-use by other applications across, and even beyond, the enterprise.

Traditional centralized technology-based organization

In Figure 10, we show how, in a traditional SOA architecture, people were aligned to their technology stack.

Figure 10: Alignment of IT staff according to technology stack in an ESB environment.
A high-level organizational chart would look something like this:

• A front-end team, focused on the end user's experience and on creating UIs.

• An ESB team, focused on identifying existing assets that could be provided as enterprise assets. This team would also create the services supporting the UIs from the front-end team.

• A back-end team, focused on the implementation of the enterprise assets surfaced through the ESB. There would be many teams here, working on many different technologies. Some might provide SOAP interfaces created in Java, some would provide COBOL copybooks delivered over MQ, yet others would create SOAP services exposed by the mainframe, and so on.

This is an organizational structure with an enterprise focus, which allows a company to rationalize its assets and enforce standards across a large variety of them. The downside of this focus is that time to market for an individual project is compromised for the good of the enterprise. A simple example would be a front-end team wanting to add a single new element to their screen. If that element doesn't exist on an existing SOAP service in the ESB, the ESB team has to get engaged. Then, predictably, this also impacts the back-end team, who have to make a change as well. Generally speaking, the code changes at each level were simple and straightforward, so that wasn't the problem. The problem was allocating the time for developers and testers to work on them. The project managers would have to get involved to figure out who on their teams had capacity to add the new element, and how to schedule the push into the various environments. If we scale this out, we also have competing priorities: each project and each new element had to be vetted and prioritized, and all of this is what took the time.
So now we are in a situation where there is a lot of overhead, in terms of time, for a very simple and straightforward change. The question is whether the benefits we get from governance and common interfaces are worth the price we pay in operational challenges. In the modern digital world of fast-paced innovation, we must find a new way to enforce standards while allowing teams to reduce their time to market. We are trying to reduce the time between the business ask and the production implementation, knowing that we may need to rethink how we implement the governance processes that were once in place.

Moving to a decentralized, business-focused team structure

Now consider the concept of microservices, where we have broken our technical assets down into smaller pieces. If we don't consider reorganizing, we might actually make things worse! We would introduce even more hand-offs as the lines of what constitutes an application, and who owns what, begin to blur. We need to rethink how we align people to technical assets. Figure 11 gives a preview of what that new alignment might look like. Instead of people being centrally aligned to the area of the architecture they work on, they have been decentralized and aligned to business domains. In the past, we had a front-end team, services teams, back-end teams, and so on; now we have a number of business teams. For example, an Account team works on anything related to accounts, regardless of whether it involves a REST API, a microservice, or a user interface.
The teams need to have cross-cutting skills, since their goal is to deliver business results, not technology. To create that diverse skill set, it's natural to start by picking one person from the old ESB team, one person from the old front-end team, and another from the back-end team. It is very important to note that this does not need to be a big-bang re-org across the entire enterprise; it can be done application by application, piece by piece.

Big bangs generally lead to big disasters

The concept that "big bangs generally lead to big disasters" isn't only applicable to code or applications; it applies to organizational structure changes as well. An organization's landscape will be a complex, heterogeneous blend of new and old. It may have a "move to cloud" strategy, yet it will also contain stable heritage assets. The organizational structure will continue to reflect that mixture. Few large enterprises will have the luxury of shifting entirely to a decentralized organizational structure, nor would they be wise to do so. For example, if an application is stable and there is nothing major on the roadmap for it, it wouldn't make sense to decompose that application into microservices, and it equally would not make sense to reorganize the team working on it. Decentralization need only occur where the autonomy it brings is required by the organization, to enable rapid innovation in a particular area.

Figure 11: Decentralized IT staff structures.
Now let's consider what this change does to an individual and what they care about. The first thing you'll notice about Figure 12 is that it shows both old and new architectural styles together. This is the reality for most organizations. There will be many existing systems that are older, more resistant to change, yet critical to the business. While some of those may be partially or even completely re-engineered or replaced, many will remain for a long time to come. In addition, there is a new wave of applications being built for agility and innovation using architectures such as microservices, and new cloud-based software-as-a-service applications being added to the mix too.

We certainly do not anticipate a company reorganizing in its entirety overnight. The point is that as the architecture evolves, so should the team structure working on those applications, and indeed on the integration between them. If the architecture for an application is not changing, and is not foreseen to change, there is no need to reorganize the people working on that application.

If we look into the concerns and motivations of the people involved, they fall into two very different groups, illustrated in Figure 12. A developer of traditional applications cares about stability, generating code for re-use, and doing a large amount of up-front due diligence. The agile teams, on the other hand, have shifted to a delivery focus. Instead of thinking about the integrity of the enterprise architecture first and being willing to compromise on individual delivery timelines, they now think about delivery first and are willing to compromise on consistency. Agile teams are more concerned with project delivery than they are with the integrity of the enterprise architecture.

Figure 12: Traditional developers versus agile teams.
Let's view these two conflicting priorities as two ends of a pendulum. There are negatives at the extreme end of each side. On one side we have analysis paralysis, where all we're doing is talking and thinking about what we should be doing; on the other side we have the wild west, where all we're doing is blindly writing code with no direction or thought toward the longer-term picture. Neither extreme is correct, and both have grave consequences if allowed to slip too far one way or the other.

Evolving the role of the Architect

The question still remains: "If I've broken my teams into business domains and they're enabled and focused on delivery, how do I get some level of consistency across all the teams? How do I prevent duplicated effort? How do I gain some semblance of consistency and control while still enabling speed to production?" The answer is to also reconsider the architecture role. In the SOA model, the architecture team would sit in an ivory tower and make decisions. In the new world, the architects have an evolved role: practicing architects. An example is depicted in Figure 13.

Here we have many teams, and some members of those teams play a dual role. On one side they are expected to be an individual contributor on the team; on the other side they sit on a committee (or guild) that rationalizes what everyone is working on. They create common best practices from their work on the ground, create shared frameworks, and share their experiences so that other teams don't blunder into traps they've already encountered. In the SOA world, the goal was to stop duplication and enforce standards before development even started. In this model the teams are empowered, and the committee or guild's responsibility is to raise, address, and fix cross-cutting concerns at the time of application development.

If there is a downside to decentralization, it may be the question of how to govern the multitude of different ways in which each application team might use the technology: essentially, how to encourage standard patterns of use and best practices. Autonomy can lead to divergence.

Figure 13: Practicing architects play a dual role as individual contributors and guild members.
If every application team creates APIs in their own style and convention, it can become complex for consumers who want to re-use those APIs. With SOA, attempts were made to create rigid standards for every aspect of how the SOAP protocol would be used, which inevitably made them harder to understand and reduced adoption. With RESTful APIs, it is more common to see convergence on conventions rather than hard standards. Either way, the need is clear: even in decentralized environments, you still need to find ways to ensure an appropriate level of commonality across the enterprise. Of course, if you are already exploring a microservices-based approach elsewhere in your enterprise, then you will be familiar with the challenges of autonomy.

The practicing architect is therefore responsible for knowing and understanding what the committee has agreed to, encouraging their team to follow the governance guidelines, bringing up cross-cutting concerns that their team has identified, and sharing what they are working on. The role also needs to remain an individual contributor on one of the teams, so that the architect feels the pain, or benefit, of the decisions made by the committee. The practicing architect is responsible for execution of the individual team's mission as well as the related governance requirements that cut across the organization.

Enforcing governance in a decentralized structure

With the concept of decentralization comes a natural skepticism over whether the committee or guild's influence will be persuasive enough to enforce the standards they have agreed to. Embedding our practicing architect into the team may not be enough. Consider how the traditional governance cycle often plays out. It typically involves the application team working through complex standards documents and meeting with the governance board before the intended implementation of the application to establish agreement. The application team would then proceed to development activities, normally beyond the eyes of the governance team. On or near completion, and close to the agreed production date, a governance review would occur. Inevitably the proposed project architecture and the actual resulting project architecture would differ, at times radically. Where the architecture review board had an objection, there would almost certainly not be time to resolve it. With the exception of extreme issues (such as a critical security flaw), the production date typically went ahead, and the technical debt was added to an ever-growing backlog.

Clearly, the shift we have discussed of placing practicing architects in the teams encourages alignment. However, the architect is now under project delivery pressure, which may mean they fall into the same trap as the teams originally did: sacrificing alignment to hit deadlines. What more can we do, via the practicing architect role, to encourage enforcement of standards? The key ingredient for success in a modern agile development environment is automation: automated build pipelines, automated testing, automated deployment, and more. The practicing architect needs to be actively involved in finding ways to automate governance.
This could be anything from automated code review, to templates for build pipelines, to standard Helm charts that ensure the target deployment topologies are homogeneous even though they are independently owned. In short, the focus is on enforcement of standards through frameworks, templates, and automation, rather than through complex documents and review processes. While the idea of getting the technology to enforce the standards is far from new, the proliferation of open standards in the DevOps toolchain and in cloud platforms in general is making it much more achievable.

Let's start with an example. Say you have microservices components that issue HTTP requests, and for every HTTP request you would like to log, in a common format, how long the HTTP transaction took as well as the HTTP response code. If every microservice did this differently, there would be no unified way of looking at all traffic. Another role of the practicing architect is therefore to build helper artifacts that are then used by the microservices. In this way, instead of the governance process being a gate, it becomes an accelerator, with the architects embedded in the teams, working on code alongside them. The governance cycle is now done with the teams: instead of reviewing documents, the code is the document, and the checkpoint is to make sure that the common code is being used.

Another dimension to note is that not all teams are created equal. Some teams crank out code like a factory, others think ahead to upcoming challenges, and some are a mix of the two. An advanced team that succeeds in automating a particular governance challenge will be a much more successful evangelist for that mechanism than any attempt to have it created by a separate governance team.

As we discuss the practicing architect, it may seem that too much is being put on their shoulders. They are responsible for application delivery, they are part of the committee discussed in the previous section, and now we are adding the additional element of writing common code to be used by other application development teams. Is it too much? A common way to offload some of that work is to create a dedicated team, under the direction of the practicing architect, that writes and tests this code. Authoring the code isn't the real challenge; testing it is. The reason for placing such a high value on testing is the potential to break, or introduce bugs into, every application that uses that code. For this reason, extra due diligence and care must be taken, justifying the investment in the additional resource allocation.

Clearly, our aim should be to ensure that general developers in the application teams can focus on writing code that delivers business value. With the architects writing or overseeing common components that naturally enforce the governance concerns, the application teams can spend more of their time on value, and less in governance sessions. Governance based on complex documentation and heavy review procedures is rarely adhered to consistently, whereas standardization based on inline tooling happens more naturally.
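The HTTP logging example above can be sketched as a small shared helper. This is a hypothetical artifact of the kind a practicing architect might publish (the function names, log format, and the stand-in service call are invented for this sketch); every team that wraps its HTTP calls with it gets the duration and response code logged the same way, giving a unified view of traffic.

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("http-common")

def log_http(fn):
    """Shared governance helper: wrap a function that performs an HTTP
    call and returns (status_code, body); log the status code and the
    elapsed time in one common format, even if the call raises."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        status = None
        try:
            status, body = fn(*args, **kwargs)
            return status, body
        finally:
            elapsed_ms = (time.monotonic() - start) * 1000
            # Common format shared by every microservice that uses the helper.
            log.info("http_request fn=%s status=%s duration_ms=%.1f",
                     fn.__name__, status, elapsed_ms)
    return wrapper

@log_http
def fake_get_orders():
    # Stand-in for a real HTTP call made by a microservice.
    return 200, b'{"orders": []}'

print(fake_get_orders())  # -> (200, b'{"orders": []}')
```

Because the checkpoint is simply "is the common helper in use?", compliance can be verified by automated code review rather than a document-driven governance board.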
How can we have multi-skilled developers?

The next, and very critical, person to consider is the developer. Developers are now expected and encouraged to be full-stack developers, solving the business problem with whatever technology is required. This puts an incredible strain on each individual developer in terms of the skills they must acquire. It isn't possible for a developer to know the deep ins and outs of every technology, so something has to give. As we'll see, what gives is the infrastructure learning curve: we are finding better and better ways to make infrastructural concerns look the same from one product to another. In the pre-cloud days, developers had to learn multiple aspects of each technology, as categorized in Figure 14.

Figure 14: Required pre-cloud technology skills.

Decentralization allows developers to focus on what their team is responsible for: delivering business results by creating artifacts.
In Figure 14, each column represents a technology and each row represents an area that the developer had to know and care about, and to understand the implications of their code on. For each technology they had to know, individually, how to install it, how many resources to allocate to it, and how to cater for high availability, scaling, and security; how to create the artifacts, how to compile and build them, where to store them, how to deploy them, and how to monitor them at runtime. All of this was unique and specific to each technology. It is no wonder that we had technology-specific teams!

However, the common capabilities and frameworks of typical cloud platforms now attempt to take care of many of those concerns in a standardized way. They allow developers to focus on what their team is responsible for: delivering business results by creating artifacts. Figure 15 shows how decentralization removes the 'white noise'. The grey area represents concerns that still need to be addressed but are no longer at the front of the developer's mind. Standardized container technology such as Docker, orchestration frameworks such as Kubernetes, and routing frameworks such as Istio enable management of runtimes in terms of scaling, high availability, deployment, and so on. Furthermore, standardization in the way products present themselves, via command line interfaces, APIs, and simple file system-based install and deployment, means that standard tools can be used to install, build, and deploy too.

One day, in an ideal world, the only unique thing about using a technology will be the creation of the artifact, such as the code or, in the case of integration, the mediation flows and data maps. Everything else will come from the environment. We'll discuss this infrastructural change in more depth in the next chapter.

Figure 15: Decentralization removes the 'white noise' of infrastructural concerns.
Conclusions on decentralized integration ownership

Of course, decentralization isn't right for every situation. It may work for some organizations, or for some parts of some organizations, but not for others. Application teams for older applications may not have the right skill sets to take on the integration work, and integration specialists may need to be seeded into their teams. This approach is a tool for potentially creating greater agility for change and scaling, but what if the application has been largely frozen for some time? At the end of the day, some organizations will find it more manageable to retain a more centralized integration team. The approach should be applied where the benefits are needed most. That said, this style of decentralized integration is what many organizations, and indeed application teams, have always wanted to do, but they first had to overcome certain technological barriers.

The core concept is a focus on delivering business value, and a shift from a focus on the enterprise to a focus on the developer. This has in part manifested itself in the movement from centralized teams to more business-specific ones, but also in more subtle changes such as the role of the practicing architect. It is also rooted in real technology improvements that take concerns away from the developer and handle them uniformly through the facilities of the cloud platform. As ever, we can refer right back to Conway's Law (circa 1967): if we're changing the way we architect systems and we want it to stick, we also need to change the organizational structure.
A real-life scenario

An organization committed to decentralization was working with a microservices architecture that had been widely adopted, and many small, independent assets were being created at a rapid pace. In addition, the infrastructure had migrated to a Docker-based environment. The organization didn't believe they needed to align their developers with specific technical assets. The original thought was that any team could work on any technical component. If a feature required a team to add an element to an existing screen, that team was empowered and had free range to modify whatever assets were needed to accomplish the business goal. There was a level of coordination before a feature was worked on, so that no two teams would be working on the same code at the same time; this avoided the need to merge code. In the beginning, for the first four or five releases, this worked out beautifully. Teams could work independently and could move quickly. However, over time, problems started to arise.

The problem

The main problem was a lack of end-state vision. Because each piece of work was taken on independently, teams often did the minimum amount of work to accomplish the business objective. The main motivators for each team were risk avoidance, the drive to meet project deadlines, and a desire not to break any existing functionality. Since each team had little experience with the code they needed to change, they began making tactical decisions to lower risk. Developers were afraid to break currently working functionality, so as they began new work they would work around code authored by other teams, and all new code was appended to existing code. The microservices kept growing over time, to the point where they were not so micro any more, and technical debt piled up. This technical debt was not apparent over the first few releases, but five or six releases in, it became a real problem. The next release required the investment of unravelling past tactical decisions. Over time, the re-hashing of previously made decisions outweighed the agility that this organizational structure had originally produced.

The solution

The solution was to align teams to microservices components and create a clear delineation of responsibilities, through a rational approach. The first step was to break down the entire solution into bounded contexts, then assign teams ownership over those bounded contexts. A bounded context is simply a business objective and a grouping of business functions. An individual team could own many microservices components, but those assets all had to be aligned to the same business objective.

Lessons Learned

Clear lines of ownership and responsibility meant that the teams thought more strategically about code modifications. The importance of creating good regression tests was now much clearer, since each team knew they would have to live with their past decisions. Importantly, another dimension of these new ownership lines was fewer handoffs between teams to accomplish a business objective. One team would own a business function from start to finish: they would modify the front-end code, the integration layer, and the back-end code, including the storage. This grouping of assets is clearly defined in microservices architecture, and the principle should also carry through to organizational structures, to reduce the handoffs between teams and increase operational efficiency.
  • 47. Chapter 6: Aspect 3: Cloud-native integration infrastructure

If we are to be truly effective in transitioning to an agile integration architecture, we will need to do more than simply break out the integrations into separate containers. We also need to apply a cloud-native, "cattle not pets", approach to the design and configuration of our integrations. As a result of moving to a fully cloud-native approach, integration becomes just another option in the toolbox of lightweight runtimes available to people building microservices-based applications. Instead of only using integration to connect applications together, it can now also be used within applications, wherever a component performs an integration-centric task.

Cattle not pets

Times have changed. Hardware is virtualized, and with container technologies such as Docker you can reduce the surrounding operating system to a minimum and start an isolated process in seconds at most. Using cloud-based infrastructure, scaling can be horizontal, adding and removing servers or containers at will under a usage-based pricing model. With that freedom, you can now deploy thin slivers of application logic on minimalist runtimes into lightweight independent containers. Running significantly more than just a pair of containers is common, which limits the effect of any one container going down. By using container orchestration frameworks, such as Kubernetes, you can introduce or dispose of containers rapidly to scale workloads up and down. These containers are treated more like a herd of cattle. Let's take a brief look at where that concept came from before we discuss how to apply it in the integration space.

Integration pets: The traditional approach

In a time when servers took weeks to provision and minutes to start, it was fashionable to boast about how long you could keep your servers running without failure. Hardware was expensive, and the more applications you could pack onto a server, the lower your running costs were. High availability (HA) was handled by using pairs of servers, and scaling was vertical, by adding more cores to a machine. Each server was unique, precious, and treated, well, like a pet. Let's examine what the common "pets" model looks like. In the analogy, if you view a server (or a pair of servers that attempt to appear as a single unit) as indispensable, it is a pet. In the context of integration, this is similar to the centralized integration topologies that the traditional approach has used to solve enterprise application integration (EAI) and service-oriented architecture use cases.
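To make the "cattle" idea concrete, the sketch below shows what running a fine-grained integration as disposable, interchangeable replicas might look like in a Kubernetes Deployment manifest. This is an illustrative assumption, not an example from the book: the name `order-integration`, the image reference, and the port are hypothetical placeholders, and a real lightweight integration runtime would supply its own image and health endpoints.

```yaml
# Hypothetical sketch: one fine-grained integration deployed as "cattle".
# All names, images, and ports below are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-integration
spec:
  replicas: 3                   # horizontal scaling: identical containers added or removed at will
  selector:
    matchLabels:
      app: order-integration
  template:
    metadata:
      labels:
        app: order-integration
    spec:
      containers:
      - name: integration-runtime
        image: registry.example.com/order-integration:1.0.0   # one thin sliver of integration logic per image
        ports:
        - containerPort: 7800
        readinessProbe:         # lets the orchestrator route around and replace unhealthy instances
          httpGet:
            path: /health
            port: 7800
```

No single replica is precious: scaling up or down (for example, `kubectl scale deployment order-integration --replicas=10`) simply changes the size of the herd, and the loss of any one container has limited effect.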