With StreamSets, data engineers can design reusable intelligent mainframe data pipelines that transform the data in motion while identifying and adapting to Data Drift.
Banking, insurance, and healthcare still rely heavily on mainframe systems. These industries are also among the most competitive. Beyond vying for customers with their peers, these entrenched players are under pressure from new digital-only companies.
These new entrants typically use the latest technologies to innovate, reduce costs, and personalize the user experience. They come in many varieties and include FinTechs, InsurTechs, and HealthTechs in their respective markets. Above all, these businesses are data organizations first that happen to provide services in their fields.
Additionally, the data-centric nature of modern business has opened up these markets, long dominated by institutions that have been around for 100 years or more, to new non-traditional players. For example, last year, retail giants Walgreens and Walmart announced new banking initiatives. Regardless of the type of business, the challengers to these entrenched businesses tend to make greater use of data, analytics, and modern application development techniques.
To address this competition and lead with innovative products and services, businesses in these industries must make mainframe data easily and securely available. Specifically, they must give analysts easy access to the data in an intuitive format so they can readily understand and work with it. At the same time, the data must remain governed under existing security and governance frameworks so that it is protected and available only to those with the correct permissions.
See also: Activate your Mainframe Data for Cloud Analytics
If these businesses can make strategic use of their mainframe data, they will be able to service their traditional customers and business while branching out into new areas.
Why the data is needed, and options to make it available
The newer entrants in banking, insurance, healthcare, and other markets leverage datasets and analytics to make real-time decisions, deliver highly personalized offerings, expand to new customers, and more. A few examples put the types of desired capabilities into perspective.
The mainframe data might be used to offer real-time onboarding to a new insurance customer. Or it might enable something more complex, like providing a dynamically adjusted loan rate based on a customer's creditworthiness, determined using multiple traditional financial metrics (e.g., FICO score) alongside newer methods such as factoring in rent and utility payments made from a bank account. The data on mainframe systems might also be used to escape what some call the tyranny of averages and make highly personalized offerings for individual customers.
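To make the dynamically adjusted loan-rate idea concrete, here is a minimal sketch of blending a traditional FICO score with an alternative signal (on-time rent and utility payments). The weights, ranges, function names, and the 80/20 split are illustrative assumptions for this article, not StreamSets functionality or any lender's actual model.

```python
# Illustrative sketch only: blend a traditional FICO score with
# alternative payment-history data into one creditworthiness score,
# then use it to adjust a base loan rate. All weights and field
# names are hypothetical assumptions.

def blended_score(fico: int, on_time_payments: int, total_payments: int) -> float:
    """Combine a FICO score (300-850) with an on-time payment ratio (0..1)."""
    fico_component = (fico - 300) / 550  # normalize FICO to 0..1
    alt_component = on_time_payments / total_payments if total_payments else 0.0
    # Hypothetical 80/20 weighting of traditional vs. alternative data.
    return 0.8 * fico_component + 0.2 * alt_component

def adjusted_rate(base_rate: float, score: float) -> float:
    """Discount up to 2 percentage points for a strong blended score."""
    return round(base_rate - 2.0 * score, 2)

# A customer with a 720 FICO and 23 of 24 on-time payments
# earns a discount off a 9.5% base rate.
rate = adjusted_rate(base_rate=9.5, score=blended_score(720, 23, 24))
```

The point of the sketch is simply that the alternative signal can only be computed if the payment records, often locked away on the mainframe, are accessible to the scoring pipeline.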
Enabling these and other offerings requires that a business apply technologies like AI and analytics to the mainframe data. To do that, however, the data must be accessible to today's modern data pipelines that feed these efforts.
There are various ways businesses can make their mainframe data available. Common methods used in the past include FTP extracts, writing code, and developing point-to-point integrations.
Such efforts were often done as one-off exercises that consumed a great amount of staff time and resources. In the days when requests for such access came in once a year, month, week, or even day, the effort was part of normal operations and the cost of doing business.
These approaches break down in modern environments where there is a need for fast and frequent access and where the data must be incorporated into expansive data pipelines and workflows.
An alternative approach for unlocking mainframe data value
Businesses need to unlock the data on their mainframes to realize its value. In particular, there is great value to be gained if the data can be made accessible to data consumers in the lines of business, data analysts, and data scientists who rely on cloud analytics platforms for their analytics and reporting efforts.
One tool that can help is an offering from StreamSets, a Software AG company. The tool is the StreamSets Mainframe Collector, which provides connectivity to mainframe and legacy systems, formatting and presentation of the data, and movement of data to cloud data and analytics platforms. The mainframe data security rules can be easily adopted and extended through the presentation layer for data governance and protection. It was originally developed by CONNX Solutions, a company that was acquired by Software AG in 2016.
StreamSets Mainframe Collector offers data access (typically from or including mainframe environments), data presentation in relational or virtualized views that are easily searchable via SQL, and data movement to cloud platforms via classical extract, transform, and load (ETL), as well as extract, load, and transform (ELT) and change data capture. With StreamSets, data engineers can design reusable intelligent data pipelines that transform data in motion while identifying and adapting to Data Drift (changes in data, source systems, formats, etc.). That ensures data is analytics-ready when it arrives at the destination.
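The Data Drift idea can be illustrated with a small sketch: a pipeline step that notices when incoming records add or drop fields relative to the schema seen so far, and evolves the schema instead of failing. This shows the concept only; the field names and records are hypothetical, and this is not StreamSets code or its API.

```python
# Conceptual sketch of "data drift" handling: detect new or missing
# fields in incoming records and adapt the known schema rather than
# rejecting the batch. Record contents are hypothetical examples.

from typing import Any

def detect_drift(known_fields: set[str], record: dict[str, Any]) -> tuple[set[str], set[str]]:
    """Return (new_fields, missing_fields) for one record."""
    fields = set(record)
    return fields - known_fields, known_fields - fields

known: set[str] = {"acct_id", "balance"}
batch = [
    {"acct_id": "001", "balance": 1200.50},
    {"acct_id": "002", "balance": 88.10, "branch": "NYC"},  # drifted: new field
]

drift_events = []
for rec in batch:
    new, missing = detect_drift(known, rec)
    if new or missing:
        drift_events.append((rec["acct_id"], new, missing))
        known |= new  # adapt: evolve the schema instead of dropping the record
```

In a production pipeline, a drift event would typically trigger schema evolution at the destination (e.g., adding a column) so downstream analytics keep working without manual rework.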
This can help provide real-time unified access to the data. That, in turn, means businesses can use that data for analytics and reporting and leverage that data in innovative ways with no risk to underlying systems.
Bottom line: StreamSets Mainframe Collector offers the ease of use and the data-sharing and data-governance capabilities that mainframe-reliant businesses need to be more responsive and innovative as they compete with both peer organizations and newer startups encroaching on their markets.
Want to learn more? Visit StreamSets’ blog and read, “Mainframe Data Is Critical for Cloud Analytics Success—But Getting to It Isn’t Easy.”