This week’s Red Hat Summit focused squarely on bringing open source and AI together with the introduction of critical solutions to help make the deployment and management of cloud and AI projects faster and easier.
Red Hat is focusing on the intersection of open source and AI. That was the message from Red Hat President and CEO Matt Hicks and others on the opening day of this week’s Red Hat Summit.
In his opening keynote, Hicks noted the importance of open source in driving AI innovation. “Red Hat believes that open source is the best model for AI and encourages contributions from a broad pool of contributors,” said Hicks.
To complement such work, Hicks announced the open-sourcing of InstructLab, a technology that allows anyone to contribute to and train large language models. He also announced the open-sourcing of the Granite family of language and code models in partnership with IBM Research.
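For developers who want to try the newly open-sourced models directly, the Granite family is published on Hugging Face. Below is a minimal sketch of loading a Granite code model with the Hugging Face transformers library; the model identifier and prompt are illustrative assumptions, so check the ibm-granite organization on Hugging Face for current names.

```python
# A minimal sketch, assuming the open-sourced Granite code models are
# published under the ibm-granite organization on Hugging Face; the
# exact model ID below is an assumption to verify before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3b-code-base"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Ask the code model to continue a function definition.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```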
During his keynote talk, Hicks brought in Ashesh Badani, Red Hat’s Senior Vice President and Chief Product Officer. Badani introduced Red Hat Enterprise Linux (RHEL) AI, noting that it is an easy starting point for building AI applications, with support for the Granite models and InstructLab. He emphasized that the solution gives organizations choice, openness, and control of their AI strategy.
A proof point for the choice and openness of the solution is the range of partners lined up to make use of RHEL AI. Partners highlighted during the keynote included Dell, Intel, and NVIDIA.
See also: Red Hat Summit Report: AI Takes Center Stage
A closer look at the solution
The major announcements in the Summit keynote are consistent with Red Hat’s open-source legacy.
Red Hat Enterprise Linux AI (RHEL AI) brings together the open source-licensed Granite large language model (LLM) family from IBM Research, InstructLab model alignment tools based on the LAB (Large-scale Alignment for chatBots) methodology, and a community-driven approach to model development through the InstructLab project.
The solution is packaged as an optimized, bootable RHEL image for deployments across hybrid cloud infrastructure. The solution is also included as part of OpenShift AI, Red Hat’s hybrid machine learning operations (MLOps) platform, for running models and InstructLab at scale across distributed cluster environments.
Simply put, RHEL AI is a foundation model platform that enables users to develop, test, and deploy generative AI (GenAI) models more seamlessly. To that point, Badani said: “RHEL AI and the InstructLab project, coupled with Red Hat OpenShift AI at scale, are designed to lower many of the barriers facing GenAI across the hybrid cloud, from limited data science skills to the sheer resources required, while fueling innovation both in enterprise deployments and in upstream communities.”
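To make that workflow concrete: InstructLab’s ilab command-line tool can serve a Granite model locally behind an OpenAI-compatible endpoint, which applications can then query like any hosted model. The snippet below is a rough sketch assuming a model served at the tool’s default local address; the base URL, port, and model name are assumptions that may vary by version and configuration.

```python
# A hedged sketch: query a locally served Granite model through the
# OpenAI-compatible endpoint that InstructLab's `ilab` tool exposes.
# The base URL, port, and model name are assumptions; adjust them to
# match your local setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local serve address
    api_key="none",  # a local server typically requires no real key
)
response = client.chat.completions.create(
    model="granite-7b-lab",  # assumed model identifier
    messages=[{"role": "user", "content": "What does InstructLab do?"}],
)
print(response.choices[0].message.content)
```

Because the endpoint follows the OpenAI API convention, the same client code can later point at a model running at scale on OpenShift AI by changing only the base URL.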
Bringing automation into the fold
Red Hat, like many other companies, has long known the importance of automation. Since its acquisition of Ansible in 2015, the company has routinely incorporated automation, and specifically, features of the Red Hat Ansible Automation Platform, into its offerings.
At the Summit, it announced the expansion of Red Hat Lightspeed across its platforms, “infusing enterprise-ready AI across the Red Hat hybrid cloud portfolio,” according to the company. (Red Hat Lightspeed with IBM watsonx Code Assistant was first introduced in the Red Hat Ansible Automation Platform as a solution to address hybrid cloud complexities and help overcome industry-wide skills gaps.)
In this week’s announcement, Red Hat OpenShift Lightspeed and Red Hat Enterprise Linux Lightspeed will offer natural language processing capabilities, delivered through an integration with generative AI technology, designed to make Red Hat’s enterprise-grade Linux and cloud-native application platforms easier to use for users of any skill level.
Red Hat described two use cases for the technology. One is using OpenShift Lightspeed with GenAI to help users of varying skill sets deploy traditional and cloud-native applications on OpenShift clusters. Another is using OpenShift Lightspeed when a cluster reaches capacity: the solution can suggest enabling autoscaling and, after determining that the clusters are hosted on a public cloud, recommend a new instance of the appropriate size.
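Red Hat has not published the exact resources Lightspeed would generate in that scenario, but workload-level autoscaling on OpenShift is commonly expressed as a HorizontalPodAutoscaler. As a rough illustration of the kind of change such a suggestion might lead to, the sketch below creates one with the Kubernetes Python client; the deployment name, namespace, and scaling thresholds are hypothetical.

```python
# Illustrative sketch only: enable workload autoscaling for a deployment
# using the Kubernetes Python client. The deployment name ("demo-app"),
# namespace ("demo"), and thresholds are hypothetical values.
from kubernetes import client, config

config.load_kube_config()  # assumes a valid kubeconfig for the cluster
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="demo-app", namespace="demo"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="demo-app"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=75,  # scale out above 75% CPU
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(
    namespace="demo", body=hpa
)
```

Cluster-level scaling, such as provisioning a new cloud instance, is handled separately in OpenShift through ClusterAutoscaler and MachineAutoscaler resources.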
A final word
The widespread embrace of AI will only translate into benefits if organizations have the right underlying infrastructure to support it, as well as the ability to quickly ramp up AI capacity to meet changing market demands.
What is needed is a robust hybrid cloud infrastructure that combines the best of AI and the cloud. This week’s Red Hat Summit focused squarely on that need, introducing solutions aimed at making cloud and AI projects faster and easier to deploy and manage.