The decoupled and asynchronous nature of an event-driven architecture enables the development of flexible, extensible, modern cloud-based serverless applications.
Architects can use event-driven implementation patterns to create modern multicloud serverless applications. In this article, we take a quick look at some of them.
Event stream processing
Event producers publish events, ordered by time of creation, forming an event stream. The stream can be distributed across the enterprise, and consumers may subscribe to the various event streams and consume either all of the events or only those they’re interested in. Events can be retained for a configurable time, allowing late consumers to come on board or existing consumers to reprocess events from a stream during recovery.
Developers can make use of event streams in a couple of ways. They can perform stream processing: the continuous processing of an event stream, usually in real time and focused on a defined time window. They can also perform streaming analytics, a type of event stream processing that leverages machine learning to detect patterns, trigger actions, or produce other events.
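As a minimal sketch of stream processing, the following Python snippet (using the kafka-python client; the topic name, brokers, and window size are placeholders) consumes an event stream and counts events in fixed one-minute windows based on each event's timestamp:

```python
import json
from collections import defaultdict

from kafka import KafkaConsumer  # pip install kafka-python

WINDOW_MS = 60_000  # one-minute tumbling window (illustrative)

consumer = KafkaConsumer(
    "orders",                             # hypothetical event stream
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",         # late consumers can replay retained events
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

counts = defaultdict(int)
for event in consumer:
    # Bucket each event by the window its timestamp falls into.
    window_start = event.timestamp - (event.timestamp % WINDOW_MS)
    counts[window_start] += 1
    print(f"window starting {window_start}: {counts[window_start]} events")
```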
Broadcast and pipelines
Events can be broadcast to multiple consumers simultaneously. Publish-subscribe platforms such as Kafka support a large number of consumers of the same event stream. Events can also be distributed across large networks and interconnected clouds to subscribers that are interested in them. Such capabilities are useful for notifications and data replication.
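In Kafka, for example, broadcast falls out of the consumer-group model: each subscriber that uses its own consumer group receives a full copy of the stream. A minimal sketch, with illustrative topic and group names:

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

def subscribe(group_id: str) -> KafkaConsumer:
    # Each distinct group_id receives every event on the topic,
    # so the stream is effectively broadcast to all groups.
    return KafkaConsumer(
        "notifications",                  # hypothetical topic
        bootstrap_servers="localhost:9092",
        group_id=group_id,
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

billing_consumer = subscribe("billing-service")
audit_consumer = subscribe("audit-service")   # both receive the same events independently
```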
In contrast to broadcasting notifications, pipelines offer a different processing model, based on functional processing stages where messages are processed, enriched, and re-published. Unlike a typical synchronous sequence of REST calls, the processing stages are connected via asynchronous distributed events, which makes the pipeline easier to reconfigure. The persistence of the intermediary events and the decoupling of the producers and consumers make an asynchronous event-processing pipeline more fault-tolerant than the equivalent REST-based integration solution.
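A single stage of such a pipeline can be sketched as a consumer that reads from one topic, enriches each message, and re-publishes it to the next stage's topic (topic names and the enrichment are illustrative):

```python
import json

from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

consumer = KafkaConsumer(
    "orders.raw",                         # input topic for this stage
    bootstrap_servers="localhost:9092",
    group_id="enrichment-stage",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for event in consumer:
    enriched = {**event.value, "region": "eu-west-1"}   # illustrative enrichment
    producer.send("orders.enriched", value=enriched)    # the next stage subscribes here
```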
Event sourcing
Event sourcing considers the event stream to be the source of truth: events are created for every state change of the relevant entities and recorded in the order that they are produced.
The resulting event stream provides a view of the system’s current state and the history of state changes. An event management infrastructure can support replaying the events for auditing, debugging, recovery, and more. Possible use cases include mirroring the data or creating new materialized views.
In an event sourcing architecture, separate components may be tasked with keeping a current view of the system’s entities for use as a cache or to speed up queries. These copies may lag behind the event stream, so the event stream itself, not the copies, is the source of truth. This differs markedly from the typical architecture in which one or more databases are the source of truth for the entities and events serve only as notifications.
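A minimal sketch of replaying an event stream to rebuild a materialized view, assuming a hypothetical account-events topic carrying deposit and withdrawal events:

```python
import json
from collections import defaultdict

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "account-events",                     # hypothetical event-sourced stream
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",         # replay from the oldest retained event
    consumer_timeout_ms=5_000,            # stop iterating once caught up (for the example)
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

balances = defaultdict(float)
for record in consumer:
    event = record.value
    if event["type"] == "deposited":
        balances[event["account_id"]] += event["amount"]
    elif event["type"] == "withdrawn":
        balances[event["account_id"]] -= event["amount"]

# `balances` is only a derived view; the event stream remains the source of truth.
```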
Change data capture (CDC)
Databases log all committed data changes such as inserts, updates, and deletes. The CDC pattern uses this change log for event sourcing. CDC also includes schema changes so consumers can understand the meaning of the changed data and adapt as the source database is extended or changed.
CDC is a low-impact and low-latency method of creating an event stream from an existing system, with either relational or document databases.
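As a rough sketch, a consumer of CDC events might look like the following. It assumes Debezium-style change events on a Kafka topic, where each record's payload carries an operation code and the new row image; adjust the topic name and payload handling to your connector's actual format:

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "inventory.public.customers",         # hypothetical <server>.<schema>.<table> CDC topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")) if v else None,
)

for record in consumer:
    change = record.value
    if change is None:
        continue                          # tombstone record for a deleted key
    payload = change["payload"]           # Debezium-style envelope (assumed)
    op = payload["op"]                    # 'c' = insert, 'u' = update, 'd' = delete
    after = payload.get("after")          # new row image (None for deletes)
    print(f"operation={op}, new row image={after}")
```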
Asynchronous request and response
Events can be leveraged to support a request/response interaction pattern, where events are created and processed asynchronously, ensuring better decoupling between the parties. Typically, a correlation ID is used to relate the events in the request/response interaction. The producer publishes an event with a correlation ID. One or more consumers then process the event asynchronously and create a response event carrying the same correlation ID, which is routed back to the producer. The producer implements a non-blocking event loop, so it doesn’t wait for responses and can proceed with other actions until the response event arrives.
For more complex interactions, other correlation strategies are possible. They can be based on timestamps, event sequencing, business IDs of the entities referenced by the events, and other criteria.
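The requester side of this pattern can be sketched as follows, with hypothetical request and response topics; the responder is assumed to copy the correlation ID from the request event into its response event:

```python
import json
import uuid

from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Publish the request event with a fresh correlation ID.
correlation_id = str(uuid.uuid4())
producer.send("quote-requests", value={"correlation_id": correlation_id, "sku": "ABC-123"})

responses = KafkaConsumer(
    "quote-responses",
    bootstrap_servers="localhost:9092",
    group_id="quote-requester",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for event in responses:
    # Match the response to the original request via the correlation ID.
    if event.value.get("correlation_id") == correlation_id:
        print("matched response:", event.value)
        break   # a real service keeps its event loop running and handles other work meanwhile
```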
Quick on-demand self-service
Persistent event streams allow consumers to subscribe and unsubscribe dynamically, as needed. So, a new consumer can come on board as required without changes to the producer, other consumers, or any other components. Eventing infrastructure components support filtering of events for subscribers, which allows consumers to dynamically select the events of interest. Such a feature also reduces communication and processing needs in the cloud, as the filtering rules are evaluated before the events are even sent to potential consumers.
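As a simple illustration, a new subscriber can attach to an existing stream and select only the events it cares about. The filter below runs on the consumer side, whereas some eventing platforms can evaluate equivalent rules before the events are delivered; names and the threshold are illustrative:

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "orders.enriched",                    # existing stream; no producer changes needed
    bootstrap_servers="localhost:9092",
    group_id="fraud-checks",              # new, self-service subscriber
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for event in consumer:
    if event.value.get("amount", 0) > 10_000:   # only high-value orders are of interest
        print("flag for review:", event.value)
```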
Cross-cluster data exchange
Cross-cluster data exchange is useful for replicating data to other clusters for local access, backup, throughput improvements, or load balancing. Routers based on the Advanced Message Queuing Protocol (AMQP) forward events to AMQP-enabled endpoints. Event brokers are often configured as clusters that scale out to provide pub/sub capabilities to local producers and consumers.
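The idea can be illustrated, in simplified form, as a forwarder that reads from a source cluster and re-publishes to a target cluster; in practice, dedicated replication tooling or AMQP routers would handle this rather than hand-rolled code, and the broker addresses and topic are placeholders:

```python
from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

source = KafkaConsumer(
    "orders.enriched",                    # topic to replicate
    bootstrap_servers="source-cluster:9092",
    group_id="cross-cluster-replicator",
)
target = KafkaProducer(bootstrap_servers="target-cluster:9092")

for record in source:
    # Forward the raw key and value bytes unchanged to the target cluster.
    target.send("orders.enriched", value=record.value, key=record.key)
```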
Conclusion
The decoupled and asynchronous nature of an event-driven architecture enables the development of flexible, extensible, modern cloud-based serverless applications, with new patterns not typically found in REST API-based microservices applications.