Amid ongoing business change and disruption, four data-driven technology trends can transform how organizations operate, communicate, and make crucial business decisions.
Regardless of industry, change and disruption have become the new norm. From hyper-automation and applied observability to an event-driven API economy, these technologies are enabling businesses to cope with disruption and harness data to drive greater efficiency and better customer experiences. And as we move through 2023, change and disruption have given rise to four emerging data-driven technologies.
1. Go the extra mile: Intelligent automation and hyper-automation put companies on the path to success
Gartner and IDC both identify intelligent automation and hyper-automation in their annual technology trends for 2023. Hyper-automation takes automation up a level, adding more intelligence and using a broader set of tools so that previously un-automatable tasks can be automated. Hyper-automation initiatives come in many different shapes and sizes and can be seen across a wide range of industries, from banking and insurance to manufacturing and healthcare.
Gartner points to U.S. healthcare company CVS Health as a prime example, having taken advantage of hyper-automation to simplify its unwieldy benefits administration processes and improve efficiency, accuracy, and customer service. A new system was developed to streamline tasks from receiving applications and payments to resolving issues. These tasks were previously cross-functional, largely manual, and time-consuming, involving data in a wide range of formats that had to be aligned with complex coding rules. Using a combination of AI, robotic process automation (RPA), machine learning, data analytics, and natural language processing (NLP), the company was able to automate much of this work.
As consumers of goods and services continue to demand faster and better customer service, companies need to overlap development cycles and find ways to reduce the cost of portions of what they deliver to stay ahead of the curve. Removing the friction that slows down implementation teams is key to success with end customers.
Give automation the best start possible: EDA can help
But to maximize success, hyper-automation requires an organization to be underpinned by an event-driven architecture (EDA) and, more specifically, an event mesh. An event mesh is an interconnected network of event brokers that pushes data in real time to the parts of the organization where it is needed. It does this dynamically: new event types can be added at any time, and applications can register interest in events, enabling a seamless interchange of data among the applications that want to use it.
Take the aviation industry as an example. An event mesh streams information such as flight routes, delays, cancellations, and mileage accruals between applications, connected devices, and people anywhere in the world, instantly. With an event mesh, information about events can be continuously streamed to multiple systems and filtered so that each system only receives the data it needs. This ultimately enhances the customer experience, as passengers, pilots, and crews are notified in real time when something relevant happens on any flight.
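To illustrate the pattern in miniature (this is a toy sketch, not any particular broker's API, and the topic names are invented), here is an in-process publish/subscribe broker with wildcard topic filtering, the mechanism that lets each system receive only the events it cares about:

```python
from collections import defaultdict
from fnmatch import fnmatch

class MiniBroker:
    """Toy in-process event broker; a real event mesh networks many brokers."""

    def __init__(self):
        self._subs = defaultdict(list)  # topic pattern -> list of callbacks

    def subscribe(self, pattern: str, callback) -> None:
        # Register interest in a topic pattern; this can happen at any time.
        self._subs[pattern].append(callback)

    def publish(self, topic: str, event: dict) -> None:
        # Push the event to every subscriber whose pattern matches the topic.
        # fnmatch gives glob-style wildcards; real brokers define their own rules.
        for pattern, callbacks in self._subs.items():
            if fnmatch(topic, pattern):
                for cb in callbacks:
                    cb(topic, event)

broker = MiniBroker()
# A crew app cares only about its own flight; an ops dashboard wants all delays.
broker.subscribe("flights/BA117/delay", lambda t, e: print("crew app:", t, e))
broker.subscribe("flights/*/delay", lambda t, e: print("ops dashboard:", t, e))

broker.publish("flights/BA117/delay", {"minutes": 25})  # reaches both
broker.publish("flights/UA090/delay", {"minutes": 10})  # ops dashboard only
```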
Or take the working example of Heathrow Airport. After downsizing its IT department during the pandemic, it needed to reduce its dependency on IT to deliver solutions. The answer was to pivot the department to the role of orchestrator, building a low-code/no-code community of practice that enabled employees across the wider business to build their own automation, from health and safety apps supporting a safe return to work to live audits. As of last August, Heathrow's hyper-automation efforts had garnered huge savings in potential outsourcing costs, reduced paperwork by 120,000 pages, and decreased manual data entry hours by more than 1,170.
See also: On-Ramp to EDA: Basics and Benefits
2. Leave no stone unturned: Applied observability takes granularity to new levels
Observability has grown up. Once a narrowly technical term, it is now recognized by organizations as the key to keeping track of data events in an increasingly decoupled business world, spanning everything from systems architecture to the business operations it supports.
Moving forward, we now have the next level: "Applied Observability," recognized by Gartner as a key 2023 strategic technology trend at its most recent Symposium in Orlando.
Applied Observability enables organizations to exploit their data artifacts for competitive advantage, ensuring that the right data is delivered at the right time for rapid action based on confirmed stakeholder actions rather than intentions. "Observable" data includes key digitized artifacts, such as logs, traces, API calls, dwell time, downloads, and file transfers, that appear whenever any stakeholder takes any kind of action. Applied Observability feeds these observable artifacts back in a highly orchestrated and integrated way to accelerate organizational decision-making, allowing the business owner to track how long an action took to fully process at every point in the workflow. With this end-to-end view, businesses gain real-time insight into bottlenecks and areas for improvement, ensuring their systems function optimally.
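To make the timing idea concrete, here is a small sketch (the artifact format and step names are invented) that derives per-hop latency from timestamped records emitted as a workflow executes:

```python
from datetime import datetime

# Hypothetical observable artifacts: timestamped records emitted as an
# order works its way through each step of a workflow.
artifacts = [
    {"step": "order-received",  "ts": "2023-03-01T09:00:00.000"},
    {"step": "payment-cleared", "ts": "2023-03-01T09:00:01.250"},
    {"step": "order-shipped",   "ts": "2023-03-01T09:04:07.900"},
]

def hop_latencies(records):
    """Return the time spent between consecutive workflow steps."""
    times = [(r["step"], datetime.fromisoformat(r["ts"])) for r in records]
    return [
        (prev_step, cur_step, (cur_ts - prev_ts).total_seconds())
        for (prev_step, prev_ts), (cur_step, cur_ts) in zip(times, times[1:])
    ]

for src, dst, seconds in hop_latencies(artifacts):
    print(f"{src} -> {dst}: {seconds:.3f}s")  # surfaces the slow hop instantly
```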
EDA takes observability one step further
But unearthing these insights requires an event-driven approach to software architecture. Distributed tracing, for example, is a method of tracking application requests as they flow from front-end devices to back-end services and databases. In contrast to traditional tracing, distributed tracing can be visualized as a searchable, graphical picture of when, where, and how a single event flowed through an enterprise, regardless of how many hops the workflow took to fully process.
Consider the potential traceability and end-to-end observability benefits across a payment ecosystem underpinned by event-driven architecture. An event mesh with embedded distributed tracing emits trace events in OpenTelemetry format, so banks can collect, visualize, and analyze them in any compatible tool. This empowers them not only to confirm that a given message was published, but to understand exactly when and by whom, where it went down to individual hops, and who received it and when… or why not.
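As a minimal sketch of what such tracing looks like in code, the following uses the OpenTelemetry Python SDK; the span names, topic attribute, and console exporter are illustrative stand-ins for a real event mesh and collector:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer that prints finished spans to the console; in production
# the exporter would point at a collector or any OpenTelemetry-compatible tool.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("payments-demo")

# Parent span: a payment message is published to the event mesh.
with tracer.start_as_current_span("publish payment") as publish_span:
    publish_span.set_attribute("messaging.destination", "payments/eu/settled")
    # Child span: a downstream consumer receives and processes the message.
    # In a real system this would run in another service, linked by trace
    # context propagated in the message headers rather than lexical nesting.
    with tracer.start_as_current_span("process payment") as process_span:
        process_span.set_attribute("messaging.operation", "process")
```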
When planned strategically and executed successfully, Applied Observability is arguably the most powerful source of data-driven decision-making. And, since OpenTelemetry is a well-defined open standard, it can be implemented across both synchronous and asynchronous workflows.
3. The Metaverse will continue to blur the lines between physical and digital
Zooming out a bit, there is no getting away from the increasing prevalence of the Metaverse: an emerging digital universe where providers, creators, and consumers can experience a parallel life to their real-world existence. Its potential to reshape everything from consumer habits to public services, health, welfare, and education is clear. Driven by technologies including digital twins, augmented reality, and virtual reality, the Metaverse will be worth $5 trillion by 2030, McKinsey estimates.
The Metaverse helps organizations operate in real time like never before
Data in real time will be the vital common denominator bridging the digital and physical divide and optimizing the Metaverse. McKinsey lists real-time data as a vital element in facilitating adoption of the Metaverse in several key industries:
- Retail: enhance the shopping, in-store, and product experience; capture efficiencies; and explore net-new revenue streams.
- Banking/Financial Services: support decentralized finance structures.
- Transportation: allow central coordination and project management (e.g., via IoT and digital twins), especially in logistics, with real-time data collection for optimization.
- Healthcare: enable fully personalized health consultations with access to real-time data.
The Metaverse, by definition, cannot be static. It needs to be in real time, in motion. It needs an event-driven architecture and a real-time event mesh to support it.
For example, if an avatar visits a retailer in the Metaverse, the retailer will need to keep track of countless user actions, such as how long someone spent looking at a particular product and which products drew the most interest. The Metaverse will produce a significant amount of data that companies will want to analyze in real time, informing decisions that range from real-time inventory adjustments to incentivizing commerce through real-time offers, discounts, and coupons, and even dynamically pricing products.
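As a toy illustration of that kind of real-time reaction (the event fields, product names, and thresholds are invented), a stream processor might watch dwell-time events from a virtual storefront and trigger an offer on the spot:

```python
# Toy stream processor: react to user-action events as they arrive.
DWELL_THRESHOLD_SECONDS = 30

def send_offer(user: str, product: str, discount_pct: int) -> None:
    # In a real system this would publish an offer event back to the mesh.
    print(f"Offer {user} {discount_pct}% off {product}")

def on_event(event: dict) -> None:
    """Handle one user-action event from the event stream."""
    if event["type"] == "product.dwell" and event["seconds"] >= DWELL_THRESHOLD_SECONDS:
        send_offer(event["user"], event["product"], discount_pct=10)

# Simulated incoming events
for evt in [
    {"type": "product.view",  "user": "avatar-42", "product": "sneaker-x", "seconds": 5},
    {"type": "product.dwell", "user": "avatar-42", "product": "sneaker-x", "seconds": 45},
]:
    on_event(evt)
```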
4. The acceleration of the event-driven API Economy
The API economy has exploded with the proliferation of web applications and the rise of digital businesses that need to expose and connect applications and assets using familiar architectural patterns and protocols such as HTTP and Representational State Transfer (REST). Research from the MIT Initiative on the Digital Economy has shown that over a four-year period, businesses using APIs saw 12.7% more growth in market capitalization than those that did not adopt APIs.
But the API economy is changing, as event-driven architecture and asynchronous event-driven APIs become increasingly important to companies striving to make their business processes, customer interactions, and supply chains more real-time. Using both synchronous and asynchronous methods can produce an application environment in which system resources are used most effectively.
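To sketch the difference (the endpoint URL and event-mesh client are hypothetical), a synchronous REST call blocks until a reply arrives, while an asynchronous consumer registers a callback once and lets events come to it:

```python
import requests  # widely used HTTP client for synchronous REST calls

# Synchronous (REST): the caller blocks until the reply arrives.
def get_balance(account_id: str) -> dict:
    resp = requests.get(
        f"https://api.example.com/accounts/{account_id}/balance",  # hypothetical endpoint
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()

# Asynchronous (event-driven): register interest once, then get pushed
# events whenever they occur; no polling and no blocked caller.
def on_balance_changed(event: dict) -> None:
    print(f"Account {event['account_id']} new balance: {event['balance']}")

# With a hypothetical event-mesh client, registration might look like:
#   mesh.subscribe("accounts/+/balance-changed", on_balance_changed)
```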
Blurring the lines between synchronous and asynchronous to create a one-stop shop
AsyncAPI and RESTful APIs will be the next evolution of the API economy, and API management platforms will need to adapt quickly to this new reality. As a prime example, Forrester recently spotlighted a global biotech company that is blazing a trail in this area. As the case study demonstrates, the organization understood that EDA is a complement to RESTful APIs, another tool in its digital business toolbox that fills in the gaps of a REST-only approach.
As a result, they planned a unified event + REST approach from the outset. Both APIs and events are managed as enabling digital products, with the governance and lifecycle of both unified as one process. A platform team builds a unified API + event platform for app dev teams rather than building each as a separate, siloed platform. The results include faster speed to market and new business opportunities. The company also embraces FAIR data principles: data that is Findable, Accessible, Interoperable, and Reusable. A digital marketplace consisting of both events and APIs makes the data findable and machine-readable via OpenAPI Specification (OAS) and AsyncAPI metadata. Data can then be accessed on demand via APIs, while events push data in real time.
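For a sense of what that machine-readable metadata looks like, here is a minimal AsyncAPI 2.x-style document (the channel and payload names are invented), expressed as a Python dict for consistency with the other sketches; in practice it would be written as YAML or JSON:

```python
# Minimal AsyncAPI 2.x-style metadata describing one event channel.
# Channel and payload names are illustrative.
async_api_doc = {
    "asyncapi": "2.6.0",
    "info": {"title": "Order Events", "version": "1.0.0"},
    "channels": {
        "orders/created": {
            "subscribe": {  # consumers subscribe to receive these events
                "message": {
                    "name": "OrderCreated",
                    "payload": {
                        "type": "object",
                        "properties": {
                            "orderId": {"type": "string"},
                            "total": {"type": "number"},
                        },
                    },
                }
            }
        }
    },
}
```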
From this example, it is clear that modern enterprises require a "one-stop shop" platform suited to their unique application integration strategies – one that delivers unified management to access, reuse, and expose both asynchronous event-driven and synchronous RESTful APIs.
2023: The year for driving change and efficiencies with data-led technology
When combined, these four trends have the potential to transform how organizations operate, communicate, and make crucial business decisions – and they all share one common requirement: 360-degree visibility into the movement of data at all times. All roads lead to event-driven architecture, which will help organizations control the flow of events in motion throughout the entire enterprise.