Real-time Analytics News for the Week Ending October 19

In this week’s real-time analytics news: NVIDIA contributes more technology and expertise to the Open Compute Project.

Keeping pace with news and developments in the real-time analytics and AI market can be a daunting task. Fortunately, we have you covered with a summary of the items our staff comes across each week. And if you prefer it in your inbox, sign up here!

NVIDIA announced that it has contributed foundational elements of its NVIDIA Blackwell accelerated computing platform design to the Open Compute Project (OCP) and broadened NVIDIA Spectrum-X support for OCP standards.

Specifically, NVIDIA will be sharing key portions of the NVIDIA GB200 NVL72 system electro-mechanical design with the OCP community — including the rack architecture, compute and switch tray mechanicals, liquid-cooling and thermal environment specifications, and NVIDIA NVLink cable cartridge volumetrics — to support higher compute density and networking bandwidth.

BMC unveiled new AI-driven product innovations to support mainframe transformation, enterprise-wide data management needs, and agentic AI. Featured products included:

  • BMC Helix GPT, which incorporates agentic AI to improve the quality of service interactions and the overall operator experience
  • BMC Helix Control-M and Control-M from BMC, which provide a single pane of glass for SaaS and on-premises orchestration, plus Data Assurance (currently in beta) to catch data problems early, before they affect downstream AI models and applications
  • BMC Helix Edge, which uses AI and digital twins of physical assets to simplify complex data collection, analytics, and inventory and lifecycle management anywhere in the network
  • BMC AMI, a GenAI-powered assistant that, when integrated with the BMC AMI DevX Code Insights solution, aims to make every developer a mainframe developer.

Other real-time analytics news in brief

Camunda announced new “out-of-the-box” automation capabilities to help organizations save time and money by removing automation silos. The addition of Camunda RPA (Robotic Process Automation) and Camunda IDP (Intelligent Document Processing) alongside new AI features makes it easier for organizations to build and scale automations, powered by best-in-class process orchestration. 

Cognizant announced enhancements to its Cognizant Neuro AI platform, aimed at enabling enterprises to rapidly discover, prototype, and develop AI use cases. The enhanced Cognizant Neuro AI platform can be leveraged for almost any industry or business challenge involving data analysis, from inventory management and dynamic pricing to fraud reduction and efficient staff allocation. The enhancements to the Neuro AI platform began as research projects at the Cognizant AI Research Lab.

Cube announced enhancements to Cube Cloud, Cube’s universal semantic layer, that improve how data is managed and consumed. New capabilities include a next-generation data modeling engine, code-named Tesseract; Data Access Policies; Cube Copilot; Cube Visual Modeler; and the general availability of Semantic Catalog. In addition, the company announced Cube OSS 1.0, marking a significant milestone.

Databricks announced a strategic collaboration agreement (SCA) with Amazon Web Services (AWS) to accelerate the development of custom models built with Databricks Mosaic AI on AWS. Databricks will leverage AWS Trainium chips as the preferred AI chip to power Mosaic AI model training and serving capabilities on AWS. Joint customers can leverage Mosaic AI to pretrain, fine-tune, augment, and serve large language models (LLMs) on their private data, backed by the scale, performance, and security of AWS.

DataStax announced the DataStax AI Platform, built with NVIDIA AI, which reduces AI development time. The platform integrates DataStax’s offerings with NVIDIA AI Enterprise software, making it easier for enterprises to build AI applications that leverage their enterprise data and context. It also makes it easier for enterprises to refine their models so they self-learn and become more accurate with customer use.

EyePop.ai unveiled its self-service AI platform, empowering businesses to build custom computer vision models using their own data or leverage curated models from EyePop.ai’s library. The platform acts as a virtual machine learning engineer, simplifying the creation of AI-driven applications with cutting-edge computer vision technology.

H2O.ai announced H2OVL Mississippi 2B and 0.8B, two new multimodal foundation models designed specifically for OCR and Document AI use cases. Compact yet highly efficient, the H2OVL Mississippi foundation models deliver enhanced performance for vision and OCR tasks in enterprise environments. Available now on Hugging Face, H2OVL Mississippi 2B and 0.8B offer enterprises an economical solution with efficiency and accuracy for real-time document analysis and image recognition.

Honeycomb announced the launch of two groundbreaking products: Honeycomb Telemetry Pipeline and Honeycomb for Log Analytics. These updates empower organizations to transform how they understand their software systems and bridge the gap between traditional monitoring and cutting-edge observability practices. Teams can develop greater effectiveness, proactivity, and resilience in managing complex systems.

Intel announced expanded support for Intel GPUs in PyTorch 2.5, which was recently released. Supported GPUs include Intel Arc discrete graphics, Intel Core Ultra processors with built-in Intel Arc graphics, and the Intel Data Center GPU Max Series. New features help accelerate machine learning workflows within the PyTorch ecosystem and provide a consistent developer experience. Developers seeking to fine-tune, run inference, and experiment with PyTorch models on Intel Core Ultra AI PCs can now directly install PyTorch preview and nightly binary releases for Windows, Linux, and Windows Subsystem for Linux 2.

Lenovo unveiled Lenovo Hybrid AI Advantage with NVIDIA. The solution combines Lenovo’s full-stack capabilities and Lenovo AI Library with NVIDIA AI software, accelerated computing, and networking. The solution aims to empower organizations by allowing them to turn data and intelligence into business outcomes faster and more efficiently, accelerating AI adoption and delivering greater return on investment (ROI).

Nebius announced the launch of a cloud computing platform built from scratch specifically for the age of AI. The new Nebius platform is designed to manage the full machine learning (ML) lifecycle – from data processing and training through to fine-tuning and inference – all in one place. Built using the NVIDIA accelerated computing platform, the result is an AI-native cloud computing environment that supports highly intensive and distributed AI and ML workloads.

Predibase unveiled the Predibase Inference Engine, its new solution engineered to deploy fine-tuned small language models (SLMs) swiftly and efficiently across both private serverless (SaaS) and virtual private cloud (VPC) environments. The Predibase Inference Engine, powered by LoRA eXchange (LoRAX – 2.1k stars on GitHub), Turbo LoRA, and seamless GPU autoscaling, serves fine-tuned SLMs faster than traditional methods and confidently handles enterprise workloads of hundreds of requests per second.

Rocket Software announced it is expanding its Hybrid Cloud solutions to include generative AI (GenAI) functionality. The enhancements harness GenAI and automation to streamline the modernization of the business applications and data upon which businesses run. The goal of the new capabilities is to improve organizational agility and decision-making by unlocking the value of these applications and data, bridging them into hybrid cloud strategies.

StreamNative announced that a managed Apache Flink BYOC product offering will be available to StreamNative customers in private preview. The offering is powered by Ververica, the original creator of Apache Flink. StreamNative’s new managed service will first be available in the Bring Your Own Cloud (BYOC) deployment model. It integrates Flink’s advanced stream processing capabilities with StreamNative’s robust streaming data storage layer in the same Virtual Private Cloud (VPC), ensuring data sovereignty across the entire data processing lifecycle. The offering will soon be available in the other StreamNative Cloud offerings, including Serverless, Dedicated, and private cloud.

SUSE announced new cloud-native edge computing capabilities as part of the general availability of SUSE Edge 3.1. The solution includes new features that enable enterprises to improve operational efficiency and deploy innovations to the edge faster. It also provides a flexible and secure cloud platform that supports the fully automated deployment and lifecycle management of tens of thousands of edge devices.

Vultr announced an expansion to its Vultr Serverless Inference platform, providing organizations with the essential infrastructure needed for agentic AI. The new capabilities empower businesses to autoscale models and leverage turnkey Retrieval-Augmented Generation (RAG) to deliver model inference across Vultr’s 32 global cloud data center locations.

If your company has real-time analytics news, send your announcements to [email protected].

About Salvatore Salamone

Salvatore Salamone is a physicist by training who has been writing about science and information technology for more than 30 years. During that time, he has been a senior or executive editor at many industry-leading publications including High Technology, Network World, Byte Magazine, Data Communications, LAN Times, InternetWeek, Bio-IT World, and Lightwave, The Journal of Fiber Optics. He also is the author of three business technology books.
