The way data moves across an enterprise can fundamentally impact how the business operates, innovates, and navigates change.
And with large-scale disruption (COVID-19, climate change, new business models, etc.) seemingly the new normal, a business's ability to successfully navigate change can be the difference between leading a market and exiting it.
That’s probably why modern data movement patterns, architecture, and infrastructure have become such hot topics in the enterprise IT space of late.
‘Data Mesh,’ ‘Event Mesh,’ and ‘Service Mesh’ represent three such ‘hot topics.’ In short, they are architectural paradigms that describe ways to move data and enable communication between applications.
To explore this area further and in a way that would be accessible to audiences that are relatively new to these concepts, we recently sat down with Denis King, President and CEO of Solace. We talked about the drivers behind the ‘mesh’ paradigms, their comparative advantages, their synergies, and technologies that enterprises are using to implement them. Below is a summary of our conversation.
RTInsights: Let’s start with a quick primer. What are data mesh, event mesh, and service mesh? And where do these concepts come from?
King: Let me start with a bit of context. Digital transformations, which have accelerated faster than we’ve ever seen in history, are creating a need for organizations, specifically large enterprises, to become more real-time, agile, and flexible. But most importantly, they’re looking to scale to an extent they’ve never had to scale before. This need for scale is driving a rethink of the infrastructure. That’s where data mesh, service mesh, and event mesh come into play.
The data mesh concept was originated by Zhamak Dehghani at ThoughtWorks. It is a set of principles that addresses many of the challenges we’ve seen with big data lakes. Data lakes brought the idea of centralizing all data into one location for analytics. But as organizations became global and distributed by nature, this idea didn’t scale very well. A data mesh decentralizes ownership of domain data and provides self-service access to that data under a common governance model: a shared set of rules and policies for that data.
Service mesh is quite different, but it addresses a similar need for scale. Service mesh was born out of internet companies like Twitter and Netflix that were trying to scale. They had large, monolithic applications making point-to-point request/reply calls to various systems of record and databases. To scale, they had to break these monoliths up into smaller components that we call microservices. As more and more microservices were deployed globally, there was a need to control and manage all the point-to-point communication between these services. That’s what the service mesh infrastructure does.
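The point-to-point control King describes, such as retries applied transparently between services, can be sketched in miniature. This is a toy Python illustration, not any real service-mesh API; the inventory service and its failure behavior are invented for the example:

```python
attempts = {"count": 0}

def inventory_service(sku: str) -> dict:
    """Stand-in for a remote microservice that fails transiently on the first call."""
    attempts["count"] += 1
    if attempts["count"] == 1:
        raise ConnectionError("upstream temporarily unavailable")
    return {"sku": sku, "in_stock": True}

def call_with_retry(fn, *args, retries: int = 3):
    """The kind of retry policy a service-mesh sidecar applies to point-to-point calls,
    so the calling service never has to implement it itself."""
    for attempt in range(1, retries + 1):
        try:
            return fn(*args)
        except ConnectionError:
            if attempt == retries:
                raise

result = call_with_retry(inventory_service, "SKU-42")
print(result)  # the second attempt succeeds
```

In a real deployment this logic lives in the mesh's sidecar proxies, configured centrally, rather than in application code.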
This is what brings me now to event mesh. While service mesh is all about point-to-point communication, event mesh handles point-to-multi-point scenarios, or what we call event streaming. If you think about a large enterprise today, everything needs to be real time. There are events all around the enterprise, from a change in price data to an update to a purchase order to a sensor going off because of a high temperature reading. These are all events, and they need to be streamed to other applications in real time. The event mesh is a distributed set of event brokers deployed in a multi-cloud, hybrid cloud fashion. Together they create a mesh that ensures the right data is streamed to the right application in real time.
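The fan-out behavior King describes, one published event streamed to every interested application, can be illustrated with a toy in-process broker. A real event mesh is a network of brokers spanning clouds and data centers, and the topic name here is invented for the sketch:

```python
from collections import defaultdict

class EventBroker:
    """Toy in-process broker: one published event fans out to every subscriber
    of its topic (point-to-multi-point), unlike a point-to-point request/reply."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic: str, handler) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subs[topic]:
            handler(event)

broker = EventBroker()
received = []

# Two independent downstream applications subscribe to the same event stream.
broker.subscribe("orders/created", lambda e: received.append(("billing", e)))
broker.subscribe("orders/created", lambda e: received.append(("shipping", e)))

# A single publish reaches both subscribers.
broker.publish("orders/created", {"order_id": 123})
print(received)
```

The publisher never knows who consumes the event, which is what lets new downstream applications be added without touching upstream code.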
RTInsights: Why are these meshes important to businesses today? What’s going on in the market that’s driving the need for them?
King: If you think about data, first and foremost, data is accelerating at a pace we’ve never seen before. Enterprises are experiencing something like 63% growth in data per month. This data is changing every second, every millisecond, every nanosecond. And organizations are becoming more distributed. We’ve seen the push toward hybrid multi-cloud and the globalization of industries, so applications, devices, everything is more distributed than we’ve ever seen before. That’s driving a need for connectivity.
Another thing driving the change is the human factor. Everybody wants information faster. Everybody wants that customer experience delivered faster than ever before. There isn’t a person, application, or thing that doesn’t want its data faster than it had it yesterday.
In both cases, service mesh, event mesh, and data mesh provide the infrastructure capabilities that help address the changes we’ve been seeing in the enterprise over the past several years.
RTInsights: How do these meshes potentially work together and complement one another?
King: There’s a need for all three of these to work in harmony within an enterprise. Specifically, service mesh and event mesh are parallel technologies. Service mesh handles all the point-to-point traffic and all the synchronous traffic, whereas event mesh handles all the point-to-multi-point or asynchronous traffic.
I’ll give you an example. If you were booking an airline ticket online, you go online, search, lock in your booking, lock in your seat, and pay for that ticket. That’s all synchronous behavior. Each one of those things has to happen one after the other, in a synchronous way, so you don’t lose that seat. However, there are many downstream systems that take care of things like your meal, baggage, and loyalty program. All of these things can happen asynchronously.
The synchronous part is serviced by the service mesh, whereas the asynchronous parts are handled by the event mesh. If you don’t have these both working in harmony, what tends to happen is your booking could be blocked by some issue with a downstream app, like a loyalty program. So, it’s really important, in order to scale and provide the best customer experience, that you have service mesh and event mesh happening together in harmony.
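This split can be sketched in a few lines of Python. It is a toy model, with hypothetical handlers and a contrived failure: the synchronous booking path completes first, and downstream consumers process the resulting event without being able to block it:

```python
import queue

events = queue.Queue()  # stands in for the event mesh

def book_seat(flight: str, seat: str) -> dict:
    # Synchronous path (service mesh territory): search, booking, seat, payment
    # must complete in order before the customer gets a confirmation.
    booking = {"flight": flight, "seat": seat, "paid": True}
    # Asynchronous path (event mesh territory): announce the booking as an event.
    events.put(("booking/confirmed", booking))
    return booking

def loyalty_handler(event: dict) -> str:
    raise RuntimeError("loyalty service down")  # a failing downstream app

def meals_handler(event: dict) -> str:
    return f"meal scheduled for seat {event['seat']}"

booking = book_seat("AC101", "14C")
assert booking["paid"]  # the customer-facing path already succeeded

# Downstream consumers drain the event later; a failing one never
# blocks or unwinds the booking itself.
topic, event = events.get()
results = []
for handler in (loyalty_handler, meals_handler):
    try:
        results.append(handler(event))
    except RuntimeError as exc:
        results.append(f"deferred: {exc}")
print(results)
```

The loyalty failure is simply deferred for retry, which is exactly the decoupling King argues prevents a downstream issue from blocking the booking.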
The data mesh is more of a set of principles. As we move toward a distributed set of data and a distributed set of domain owners of that data, we need ways to alert and push changes in that data to other data sets distributed around the world. This is where an event mesh, as a central component, can be the backbone for changes in that data. What we’re seeing in the industry is that while service mesh and event mesh are parallel technologies, the event mesh is really becoming the backbone of how a data mesh actually operates.
RTInsights: What challenges do enterprise IT teams face as they work to implement these meshes?
King: While there’s a maturity happening in the market, three of the bigger challenges with these three technologies have to do with interoperability, governance, and tooling.
Interoperability is about open standards, but it’s also about enterprises fixating on just one of these as their solution. They say, “I’m building a data mesh,” or “I’m building a service mesh,” when the reality is they need all three. And they need to think about all three of them working in harmony.
With regard to governance, it’s difficult to have an event mesh, service mesh, and data mesh working together because you can’t buy all three as one product. It’s difficult to devise a governance model that spans all three technology stacks. That’s a challenge, but it’s also a matter of market maturity. I think we’ll see it addressed over time.
The last one is around tooling. Cloud has made these concepts simpler to deploy. And with the globalization of organizations, they have runtimes deployed all around the world. Under these conditions, having one place where you can manage all three of these (event, service, and data meshes) is a big challenge. There needs to be more maturity in the tooling across all three of these areas to make it work at scale for a large enterprise.
RTInsights: How is Solace working with enterprises to help them overcome these challenges and leverage an event mesh?
King: Solace provides the Solace PubSub+ Platform, which supports open standards natively. It supports the connectivity of different cloud-native services, business applications, devices, and things. More importantly, it exposes a RESTful interface that integrates seamlessly with a service mesh. An organization using an event mesh as an integral part of connecting its enterprise can have that mesh co-exist side by side with a service mesh with complete interoperability.
The second area Solace focuses on is governance. One of the reasons event streaming and event-driven architecture weren’t pervasive in mainstream enterprises five or six years ago is that there was no way to discover, catalog, and govern the event streams that flow freely throughout an organization. You want to liberate that data and those changes, but you need the tooling and governance model to ensure the right data goes to the right application. You need to satisfy all the privacy rules to be able to do that. That’s a big part of the event management and event portal strategy for Solace.
The last thing Solace provides is simplicity in using an event-driven architecture. An event mesh is made up of distributed event brokers. You need to be able to manage the lifecycle of these brokers, and their governance, through a single pane of glass. That way, an enterprise can truly realize event streams and event-driven architecture. That is something the PubSub+ Platform provides from a tooling point of view.
Our efforts focus on open standards, governance, simplicity, and a single pane of glass for the enterprise.
RTInsights: What does the future look like regarding the movement and processing of data around an organization?
King: Meshes are not going anywhere. In fact, I think meshes will become mainstream across all major enterprises, and not just one mesh. We talked about point-to-point communication with service mesh. We talked about point-to-multi-point with event mesh. And we talked about the principles of data mesh. All three of these will have to operate and co-exist in harmony as a foundational element of your digital transformation. If we fast forward into the future, that will be common for all enterprises.
More importantly, these meshes need to communicate and be in sync with the integration strategy and the API strategy of an organization. In order for the enterprise to scale, they need to co-exist with integration technologies that talk to the business applications. And they need to be a part of the whole API economy that becomes the glue layer for applications.
The point is that the need for real-time data will continue to expand. The changes in data will continue to expand. And we will need to operate all three of these meshes at internet scale with enterprise-grade capabilities. I believe that, ultimately, will be the future if we fast forward a few years from now.