ARtSENSE, an innovative project underway in Europe, illustrates how Complex Event Processing (CEP) and user behavior monitoring can create customized, adaptive media displays on the fly. Here, RTInsights expert Dr. Nenad Stojanovic explains.
Consumers today must cope with enormous quantities of information on TV, on the Web, and in their e-mail and physical mailboxes. To gain their attention, businesses and other organizations that produce this information must make the content unique and relevant to each consumer's needs. This means they must identify the user's interests and context, such as location, demographics and client device, and deliver the content in a non-intrusive way.
Adaptive Augmented Reality (A2R) is one technology that is enabling organizations to do this. Augmented reality systems merge computer-created content, such as images, explanatory text, maps, voice or video, with a user's real-life experiences.
Google Glass, a sort of smartphone in eyeglasses, is one of the best-known examples of this approach to proactive information delivery: it allows the wearer to read text, take pictures, get directions and perform other tasks using voice commands, all while viewing the results through the eyeglasses.
ARtSENSE Aims to Create Personalized Museum Tours
A lesser-known but leading-edge project in A2R is underway in ARtSENSE, an EU-funded research project. The goal of ARtSENSE is to develop a new generation of A2R devices that will employ event processing to produce more personalized museum experiences. The project involves researchers, museum professionals and artists from three art institutions: the National Museum of Decorative Arts in Madrid, the Musée des Arts et Métiers in Paris and the Foundation for Art and Creative Technology in Liverpool. It is led by FZI Karlsruhe, a non-profit contract research organization that focuses on Information Technology (IT).
The ARtSENSE project employs A2R, a novel approach to augmented reality (AR) in which content is adapted in real time based on the user's individual behaviors and interest levels. ARtSENSE aims to create a new generation of AR guides for museum visitors. Indeed, the changing demands of museum visitors and the quest for an ever more personalized museum experience have created the need for a new type of AR technology.
Through a combination of three types of sensors (visual, acoustic and bio-sensors), A2R identifies and supports both the visual and the affective experiences of visitors. The goal is to structure and shape the museum experience according to each visitor's interests and needs.
One of the most exciting components of the system is its hardware: a pair of glasses that lets people command and interact with the glasses' software through eye movements and hand gestures. Another notable feature is the set of wearable sensors that detect the user's interest levels by measuring his or her bio-signals (i.e., skin conductance level and heart rate). This information is fed into a short-term profile.
The project combines not just one but three types of sensors: visual, audio and psycho-physiological (or bio-sensory). The goal is to estimate the interest and engagement of the visitor and adapt the content provided by the multimedia tour guide accordingly. To reach this objective, both the visitor and the museum environment have to be monitored for such things as the direction of the visitor's gaze, gestures, heart rate and the acoustic environment. By combining all of this sensory input, the multimedia AR software adapts the content to the visitor's interest level.
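As a rough illustration of how such sensor fusion might work, the following Python sketch combines hypothetical gaze, audio and bio-sensor readings into a single interest estimate. The field names, weights and thresholds are assumptions chosen for illustration, not the project's actual data model or implementation.

```python
from dataclasses import dataclass

# Hypothetical snapshot of the three sensor channels; names and
# thresholds below are illustrative assumptions, not ARtSENSE's API.
@dataclass
class SensorSnapshot:
    gaze_duration_s: float    # how long the gaze has rested on an artwork
    ambient_noise_db: float   # acoustic sensor reading
    heart_rate_bpm: float     # bio-sensor reading
    skin_conductance: float   # bio-sensor reading (microsiemens)

def estimate_interest(s: SensorSnapshot, resting_hr_bpm: float) -> float:
    """Fuse visual, acoustic and bio-signals into a rough 0..1 interest score."""
    gaze_score = min(s.gaze_duration_s / 5.0, 1.0)               # long gaze -> interest
    hr_score = max(0.0, min((s.heart_rate_bpm - resting_hr_bpm) / 20.0, 1.0))
    scl_score = min(s.skin_conductance / 10.0, 1.0)              # arousal proxy
    quiet_bonus = 0.1 if s.ambient_noise_db < 50.0 else 0.0      # quiet aids focus
    return min(0.4 * gaze_score + 0.3 * hr_score + 0.2 * scl_score + quiet_bonus, 1.0)

# A score like this could be appended to the visitor's short-term
# profile on each update cycle.
```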
The Role of Event Processing
The basis of ARtSENSE is the aforementioned CEP, which combines data from several sources and infers multiple possible outcomes. CEP enables efficient, real-time integration of the different sensors by translating their signals into well-structured events that can be consumed by other components and pre- and post-processed by the CEP engine. The engine detects a visitor's interests based on predefined patterns matched against real-time sensor data (i.e., specific combinations of the audio, visual and bio-sensing data).
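To make the event-translation step concrete, here is a minimal Python sketch of how a sensor adapter might wrap a raw signal in a uniform, timestamped event and publish it for downstream consumers. The Event structure, field names and in-process queue are illustrative assumptions; the project's actual event model is not described here.

```python
import time
from dataclasses import dataclass, field
from queue import Queue

# Illustrative event envelope: each sensor adapter wraps its raw signal
# in a uniform, timestamped event that a CEP engine can consume.
@dataclass
class Event:
    source: str        # e.g. "gaze", "audio", "bio"
    kind: str          # e.g. "GazeFixation", "NoiseLevel", "HeartRate"
    payload: dict
    timestamp: float = field(default_factory=time.time)

event_bus: "Queue[Event]" = Queue()  # stand-in for the real event channel

def publish_gaze_fixation(artwork_id: str, duration_s: float) -> None:
    """Translate a raw gaze-tracker signal into a structured event."""
    event_bus.put(Event(source="gaze", kind="GazeFixation",
                        payload={"artwork": artwork_id, "duration_s": duration_s}))

publish_gaze_fixation("exhibit-42", 4.2)
```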
For instance, it might detect that a visitor has looked at a piece of artwork for more than three or four seconds and that the noise level in the area is low. At the same time, the bio-sensor detects a higher pulse rate, which indicates an increase in the person's interest. Event processing technology then makes it possible to trigger the appropriate change in the tour, such as providing additional information on the artwork of greater interest.
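The sketch below expresses that example as a simple pattern rule. The thresholds (three seconds of gaze, a 50 dB noise ceiling, a 10 percent pulse elevation) are hypothetical values chosen to mirror the scenario, not figures from the project.

```python
def interest_pattern(gaze_duration_s: float, noise_db: float,
                     heart_rate_bpm: float, resting_hr_bpm: float) -> bool:
    """Illustrative CEP-style rule mirroring the example above."""
    return (gaze_duration_s > 3.0                       # visitor lingers on artwork
            and noise_db < 50.0                         # surroundings are quiet
            and heart_rate_bpm > resting_hr_bpm * 1.1)  # pulse is elevated

# When the pattern matches, the guide adapts the tour:
if interest_pattern(gaze_duration_s=4.2, noise_db=42.0,
                    heart_rate_bpm=88.0, resting_hr_bpm=72.0):
    print("Visitor engaged: offer additional material on this artwork")
```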
ARtSENSE is paving the way for new types of CEP applications focused on the processing of personal data. With the explosion of smartphones and other mobile devices, and the rise in the data those devices will transmit and store (about 40,000 exabytes by 2019), there is a huge opportunity for event processing in mobile applications.
One very promising area is the remote monitoring of people's health status or activity levels, using wearable sensors to support automatic alarming. A patient's monitor might detect a variation in his or her heart rate or blood pressure, or a driver monitor might send an alert if it detects that a driver is falling asleep.
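A minimal sketch of such threshold-based alarming might look like the following; the heart-rate bounds and the notify hook are illustrative assumptions, and a real system would calibrate per patient.

```python
# Illustrative safe band for heart rate, in beats per minute.
HR_LOW_BPM, HR_HIGH_BPM = 45.0, 130.0

def check_vitals(heart_rate_bpm: float, notify) -> None:
    """Raise an alarm when the heart rate leaves the safe band."""
    if not (HR_LOW_BPM <= heart_rate_bpm <= HR_HIGH_BPM):
        notify(f"Abnormal heart rate: {heart_rate_bpm:.0f} bpm")

check_vitals(142.0, notify=print)  # -> "Abnormal heart rate: 142 bpm"
```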
Event processing technologies will vastly improve our ability to filter relevant data and personalize a wide range of experiences, as well as enable people to automatically monitor the things that are most important to them, all without taking any time away from other tasks. Event processing and adaptive technologies may not reduce the information overload for consumers and businesses, but they will provide an intelligent method for mining and managing that information.