Microservices are driving a major renaissance in application development. But their use also requires IT organizations to rethink the flow of data and processes across multiple microservices.
Microservices are being rapidly adopted as a way to build and modernize many classes of distributed applications, making them more scalable, flexible, and resilient while keeping them comparatively simple to develop. The basic idea is that instead of building self-contained, monolithic applications, a microservices approach decomposes an application into a modular set of independent components that can be dynamically integrated with one another via application programming interfaces (APIs).
Collectively, microservices are driving a major renaissance in application development. Applications can be constructed by connecting a series of reusable components that have been individually tested and hardened. But this, in turn, also requires IT organizations to rethink the flow of data and processes across multiple microservices.
Data and process workflows exist both inside and across applications. Although in traditional monolithic architectures those workflows can be explicit, with well-defined protocols and interfaces, most of them are implicitly embedded inside the application without any formal specification. Microservices disconnect those implicit workflows, externalizing what previously was buried inside the application. That makes it necessary to identify and reconstruct those workflows between microservice components.
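To make that concrete, consider a simplified (and entirely hypothetical) order-handling step. In a monolith the workflow is just an in-process function call; in a microservices design the same step becomes an explicit call across a service boundary, as in this Python sketch (the inventory service endpoint is an assumption for illustration):

```python
import json
import urllib.request

# --- Monolith: the workflow is an implicit, in-process call chain ---
def reserve_inventory(order):
    # Inventory logic lives in the same process and codebase as order handling.
    return {"order_id": order["id"], "reserved": True}

def process_order_monolith(order):
    return reserve_inventory(order)  # No formal interface; just a function call.

# --- Microservices: the same step becomes an explicit, networked data flow ---
INVENTORY_URL = "http://inventory-service/reserve"  # hypothetical endpoint

def process_order_microservices(order):
    request = urllib.request.Request(
        INVENTORY_URL,
        data=json.dumps(order).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=2) as response:
        # The workflow now crosses a service boundary over a defined interface.
        return json.loads(response.read())
```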
Specifically, IT teams need to:
- Identify workflows in existing applications.
- Map those workflows to planned microservices as well as identify any new workflows.
- Identify the data flows that correspond to these workflows.
- Determine the requirements needed to support those data flows (throughput, latency, etc.).
- Choose the technology needed to support those data flows and requirements.
The choice of technology to support data flows and their requirements is critical to success with microservices. In microservices-based applications, the pipelines through which data moves not only need to be carefully constructed; the rate at which data moves through them also needs to be continually monitored. Workflows and data flows are intimately bound together, with data providing both the control information and metadata that direct the workflows and the data payload on which individual services operate.
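As a rough illustration of that coupling, a message flowing between services typically carries control metadata that directs the workflow alongside the payload a service actually operates on. The field names below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Dict

@dataclass
class WorkflowMessage:
    """Illustrative envelope: control metadata plus the payload a service works on."""
    workflow_id: str   # Correlates every message belonging to one workflow instance.
    step: str          # Control information: which workflow step this message drives.
    produced_at: str   # Timestamp metadata, useful for monitoring flow latency.
    payload: Dict[str, Any] = field(default_factory=dict)  # Data a service operates on.

# A downstream service routes on the metadata and operates on the payload.
msg = WorkflowMessage(
    workflow_id="order-1234",
    step="reserve-inventory",
    produced_at=datetime.now(timezone.utc).isoformat(),
    payload={"sku": "ABC-1", "quantity": 2},
)
```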
The challenge this creates is the need to identify and specify not only the order and components of workflows, but also the design of all the associated data flows. An IT team needs to identify the workflows in current applications by mapping process and data flows through those applications and, where appropriate, map them onto the microservices being created. They also need to ensure that robust interfaces are defined wherever those workflows cross microservice boundaries.
Microservices applications multiply both the number of connections that need to be made and the amount of data that needs to be transferred. All that needs to be accomplished without compromising performance. That makes it critical to address the throughput, scalability, resiliency, and latency requirements of both individual microservices and the application as a whole.
However, the traditional approaches used for monolithic applications simply are not up to the job. For example, monolithic applications are most commonly designed around the assumption that data will be consumed in scheduled batches. In contrast, data needed by microservices can arrive at any time. Data protection and resiliency also need to be approached differently: the overnight backup approach common to monolithic applications is too cumbersome in a microservices-based application, while the replication tools employed with traditional applications simply add too much overhead.
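The difference shows up directly in how services consume data. The sketch below contrasts the two models in Python; load_batch, process, and receive are hypothetical placeholders for application and transport logic:

```python
import time

# --- Monolithic assumption: data arrives and is processed in scheduled batches ---
def nightly_batch_job(load_batch, process):
    # load_batch() and process() are hypothetical application functions.
    for record in load_batch():   # All of the day's records, fetched at once.
        process(record)

# --- Microservice reality: data can arrive at any time ---
def event_driven_loop(receive, process, keep_running=lambda: True):
    # receive() is a hypothetical non-blocking read from a stream or queue.
    while keep_running():
        record = receive()
        if record is not None:
            process(record)       # Handle each record as soon as it shows up.
        else:
            time.sleep(0.1)       # Back off briefly when nothing is pending.
```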
It also doesn’t work to assume that each microservice can provide all of the capabilities it needs on its own. Burdening each microservice with additional, often redundant logic for data movement, data protection, resiliency, flow control, and more makes microservices difficult to develop and painful to support. Third-party tools for connecting and handling data, such as messaging or queuing technologies, create headaches of their own in a microservices environment because they struggle to meet performance and scalability requirements, leading to a patchwork of disconnected islands that each connect only a subset of microservices.
Creating and managing data flows to support microservices requires nothing less than a unified data platform. Using such a platform to connect microservices makes it possible to offload capabilities such as flow control, resource management, and data durability guarantees from the microservices themselves, making them easier to build, maintain, and scale.
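As one sketch of what that offloading can look like, the snippet below uses the Python client for Apache Pulsar (the messaging technology discussed below) to publish an event; the service URL and topic name are assumptions. The service simply sends, while the platform handles backpressure and durable persistence:

```python
import pulsar

# Connect to the platform's messaging layer (service URL is an assumption).
client = pulsar.Client('pulsar://localhost:6650')

# Flow control and durability are handled by the platform, not the service:
# block_if_queue_full applies backpressure instead of dropping data, and
# send() returns only once the message has been durably persisted.
producer = client.create_producer(
    'orders',                  # hypothetical topic name
    block_if_queue_full=True,
    batching_enabled=True,
)

producer.send(b'{"workflow_id": "order-1234", "step": "reserve-inventory"}')
client.close()
```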
Such a modern data platform needs to meet the following requirements:
- Scalable connectivity: Microservices explode the number of connections between services. Platforms employed to connect microservices need to be highly scalable.
- Resiliency: Connections between microservices need to handle any number of possible failure scenarios. Availability requirements are higher because of the multiple connections needed to implement workflows.
- Simplicity: Microservices are intended to be as simple and lightweight as possible, so the platform that connects them must take on that complexity rather than push it back into the services.
We at Streamlio have taken those requirements to heart and built a unified data platform, based on proven technologies, that meets them. The Streamlio solution is built to take advantage of modern infrastructure technologies such as containers, Mesos, Kubernetes, and Nomad. On top of that base reside our core open source technologies – Apache Pulsar for messaging, Apache Heron for processing, and Apache BookKeeper for stream storage. Our solution takes technologies built for performance and scale that run in production at companies including Twitter and Yahoo and makes them available to any enterprise.
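As a minimal example of the consuming side, the sketch below uses Apache Pulsar’s Python client with a shared subscription, so additional consumer instances can be added to scale out processing while unacknowledged messages remain durably stored by the platform. The service URL, topic name, and handler are assumptions:

```python
import pulsar

def handle(data: bytes) -> None:
    # Hypothetical application logic for one message.
    print(data)

client = pulsar.Client('pulsar://localhost:6650')   # service URL is an assumption

# A Shared subscription lets more consumer instances be added to scale out
# processing; unacknowledged messages are retained durably by the platform.
consumer = client.subscribe(
    'orders',                                       # hypothetical topic name
    subscription_name='inventory-service',
    consumer_type=pulsar.ConsumerType.Shared,
)

while True:
    msg = consumer.receive()
    try:
        handle(msg.data())
        consumer.acknowledge(msg)                   # mark as processed once handled
    except Exception:
        consumer.negative_acknowledge(msg)          # ask for redelivery on failure
```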
We invite you to visit https://streaml.io to learn more about our solution. We’re confident you’ll appreciate not only the flexibility but also the capabilities, scale, and performance of a platform designed from the ground up with microservices in mind.