Real-time Analytics News for the Week Ending March 22


In this week’s real-time analytics news: The NVIDIA GTC AI Conference delivered an abundance of announcements related to accelerating AI workloads.

Keeping pace with news and developments in the real-time analytics and AI market can be a daunting task. Fortunately, we have you covered with a summary of the items our staff comes across each week. And if you prefer it in your inbox, sign up here!

NVIDIA used this week’s GTC AI Conference to make a number of announcements aimed at accelerating AI workloads. The announcements included:

  • The NVIDIA AI Data Platform, a customizable reference design to build a new class of AI infrastructure for demanding AI inference workloads. It includes enterprise storage platforms with AI query agents fueled by NVIDIA accelerated computing, networking, and software. Using the NVIDIA AI Data Platform, NVIDIA-Certified Storage providers can build infrastructure to speed AI reasoning workloads with specialized AI query agents.
  • NVIDIA DGX SuperPOD, an advanced enterprise AI infrastructure, built with NVIDIA Blackwell Ultra GPUs. Enterprises can use new NVIDIA DGX GB300 and NVIDIA DGX B300 systems, integrated with NVIDIA networking, to deliver out-of-the-box DGX SuperPOD AI supercomputers that offer FP4 precision and faster AI reasoning to supercharge token generation for AI applications.
  • NVIDIA Spectrum-X and NVIDIA Quantum-X silicon photonics networking switches, which enable AI factories to connect millions of GPUs across sites while drastically reducing energy consumption and operational costs. NVIDIA has achieved the fusion of electronic circuits and optical communications at massive scale.
  • NVIDIA DGX personal AI supercomputers powered by the NVIDIA Grace Blackwell platform. DGX Spark — formerly Project DIGITS — and DGX Station, a new high-performance NVIDIA Grace Blackwell desktop supercomputer powered by the NVIDIA Blackwell Ultra platform, enable AI developers and data scientists to prototype, fine-tune, and run inference on large models on desktops. Users can run these models locally or deploy them on NVIDIA DGX Cloud or any other accelerated cloud or data center infrastructure.

The company also unveiled Project Aether, a collection of tools and processes that automatically qualify, test, configure, and optimize Apache Spark workloads for GPU acceleration at scale.

In addition to these announcements by the company, many partners also used the conference to break news. Some of these top news items include:

Oracle and NVIDIA announced an integration of NVIDIA accelerated computing and inference software with Oracle’s AI infrastructure and generative AI services to help organizations globally speed the creation of agentic AI applications. The new integration between Oracle Cloud Infrastructure (OCI) and the NVIDIA AI Enterprise software platform will make 160+ AI tools and 100+ NVIDIA NIM microservices natively available through the OCI Console. In addition, the two companies are collaborating on the no-code deployment of both Oracle and NVIDIA AI Blueprints and on accelerating AI vector search in Oracle Database 23ai with the NVIDIA cuVS library.

In other Oracle news, the company announced that NVIDIA AI Enterprise will be available on Oracle Cloud Infrastructure (OCI).

Anaconda announced it is expanding its partnership with NVIDIA to help enterprises and developers evolve their AI capabilities safely and responsibly. The work includes distributing CUDA libraries as part of the Anaconda Platform to further accelerate data and AI processing, and improving access to GPUs in Jupyter Notebooks, available first as a private preview in Anaconda Notebooks.

Credo, working with XConn Technologies, announced a public demonstration of multi-vendor PCI Express (PCIe) 5.0 interoperability featuring the two companies’ technologies. The demonstration pairs Credo Toucan-based OSFP-XD PCIe Active Electrical Cables (AECs) with an XConn 256-lane PCIe 5.0 Apollo switch in an advanced AI cluster running a Hugging Face Llama large language model (LLM) inference workload. The Credo AECs connect ten NVIDIA H100 GPUs to the server through the XConn switch.

Cirrascale Cloud Services announced the early preview of its Cirrascale Inference Platform—an enterprise inference-as-a-service solution. The platform will launch with the NVIDIA Blackwell architecture, featuring the NVIDIA HGX B200 and NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.

DataRobot announced the general availability of expanded integrations with NVIDIA AI Enterprise to accelerate production-ready agentic AI applications. Cloud customers can now use DataRobot fully integrated and pre-installed with NVIDIA AI Enterprise, complete with a new gallery of NVIDIA NIM microservices and NVIDIA NeMo framework components, including the new NVIDIA Llama Nemotron Reasoning models, accelerating AI development and delivery.

DataStax announced Astra DB Hybrid Search, a new capability that significantly enhances retrieval-augmented generation (RAG) systems by improving search relevance. Accelerated by the NVIDIA NeMo Retriever reranking microservices, part of NVIDIA AI Enterprise, Astra DB Hybrid Search seamlessly integrates vector search and lexical search.
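As general background (this is not the Astra DB API), hybrid search systems typically run vector and lexical retrieval separately and then merge the two ranked result lists before an optional reranking pass. One common, simple merge strategy is Reciprocal Rank Fusion; a minimal sketch in Python, using hypothetical document IDs:

```python
def rrf_fuse(vector_ranking, lexical_ranking, k=60):
    """Combine two ranked lists of document IDs with Reciprocal Rank Fusion.

    Each document scores sum(1 / (k + rank)) across the rankings it
    appears in, so items ranked highly by either method rise to the top.
    """
    scores = {}
    for ranking in (vector_ranking, lexical_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rankings: "d2" places second in vector search and first
# in lexical search, so it edges out "d1" after fusion.
fused = rrf_fuse(["d1", "d2", "d3"], ["d2", "d4", "d1"])
```

A reranking microservice, such as the NVIDIA NeMo Retriever rerankers mentioned above, would then rescore the fused candidates against the query.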

DDN announced it is integrating the NVIDIA AI Data Platform reference design with DDN EXAScaler and DDN Infinia 2.0—part of the DDN AI Data Intelligence Platform — to power a new wave of agentic AI applications in the enterprise.

Dell Technologies, in collaboration with NVIDIA, announced new AI PCs, infrastructure, software, and services advancements to accelerate enterprise AI innovation at any scale. For example, the company expanded the Dell Pro Max high-performance AI PC portfolio to meet the needs of today’s AI developers, power users, and specialty users. Additionally, some new Dell PowerEdge Servers will support the NVIDIA Blackwell Ultra platform, and others will be available with the Dell AI Factory with NVIDIA.

Galileo announced an integration with NVIDIA NeMo, enabling customers to continuously improve their custom generative AI models. This allows customers to evaluate models comprehensively across the development lifecycle, curating feedback into datasets that power additional fine-tuning.

Hewlett Packard Enterprise (HPE) announced new enterprise AI solutions with NVIDIA from NVIDIA AI Computing by HPE that accelerate the time to value for customers deploying generative, agentic, and physical AI.

Hitachi Vantara announced the Hitachi iQ M Series. Integrating accelerated computing platforms with robust networking, the Hitachi iQ M Series combines Hitachi Vantara Virtual Storage Platform One (VSP One) storage, integrated file system choices, and optional NVIDIA AI Enterprise software into a scalable and adaptable AI infrastructure solution.

In other Hitachi Vantara news, the company entered into a strategic resell agreement with Hammerspace. Through the agreement, Hitachi Vantara has integrated Hammerspace software with the VSP One storage platform, expanding Hitachi iQ’s capabilities to address different data management requirements for dataset creation, processing, governance, and protection.

IBM announced new collaborations with NVIDIA, including planned new integrations based on the NVIDIA AI Data Platform reference design to help enterprises more effectively put their data to work to build, scale, and manage generative AI workloads and agentic AI applications.

Infleqtion unveiled Contextual Machine Learning (CML), an AI approach that allows machine learning models to process information over longer time periods and from multiple sources simultaneously. This enhances AI’s ability to recognize patterns in sensor data, predict trends, and make real-time decisions with greater accuracy. CML is implemented on NVIDIA A100 GPUs using the NVIDIA CUDA-Q platform.

Lenovo unveiled new Lenovo Hybrid AI Advantage with NVIDIA solutions designed to accelerate AI adoption and boost business productivity by fast-tracking agentic AI. The validated, full-stack AI solutions enable enterprises to quickly build and deploy AI agents for a broad range of high-demand use cases, increasing productivity, agility, and trust.

MinIO unveiled three upcoming advancements to MinIO AIStor that deepen its support for the NVIDIA AI ecosystem. The new integrations will help users maximize the utilization and efficiency of their AI infrastructures while streamlining their management, freeing up personnel for more strategic AI activities.

NetApp announced intelligent data infrastructure for agentic AI that taps the NVIDIA AI Data Platform reference design. By collaborating with NVIDIA, NetApp is enabling businesses to better leverage their data to fuel AI reasoning inference.

Nebius announced that it will be among the first AI cloud providers to offer the new NVIDIA Blackwell Ultra AI factory platform – unlocking the world’s most advanced compute on demand for AI builders and enterprises everywhere to build the next generation of agentic, reasoning, and physical AI.

Pure Storage announced it is integrating the NVIDIA AI Data Platform reference design into its FlashBlade platform, expanding its commitment to delivering validated, enterprise-grade, scalable, AI-ready solutions for customers that meet NVIDIA’s rigorous standards.

Supermicro announced support for the new NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs on a range of workload-optimized GPU servers and workstations. Specifically optimized for the NVIDIA Blackwell generation of PCIe GPUs, the broad range of Supermicro servers will enable more enterprises to leverage accelerated computing for LLM inference and fine-tuning agentic AI.

In other Supermicro news, the company announced a new optimized storage server for high-performance software-defined storage workloads found in AI and ML training, inferencing, and analytics. The company worked closely with NVIDIA and WEKA to enable customers to create power-efficient, high-performance storage systems for their AI factories.

Together AI unveiled new advancements, including the deployment of NVIDIA Blackwell GPUs at scale and the preview release of Together Instant GPU Clusters, which offer up to 64 NVIDIA Hopper GPUs (80GB SXM) per deployment. 

VAST Data announced the availability of VAST InsightEngine, starting with NVIDIA DGX systems and expanding to NVIDIA-Certified Systems from leading server providers.

Vultr announced it is among the first cloud providers to offer access to the NVIDIA HGX B200. Vultr Cloud GPU, accelerated by NVIDIA HGX B200, will provide training and inference support for enterprises looking to scale AI-native applications via Vultr’s cloud data center regions worldwide.

WEKA announced it is integrating with the NVIDIA AI Data Platform reference design and has achieved NVIDIA storage certifications to provide optimized AI infrastructure for the future of agentic AI and reasoning models.

Revisit coverage of last year’s NVIDIA GTC Conference.

Real-time analytics news in brief

Airbyte announced new capabilities for moving data at scale for artificial intelligence (AI) and analytics workloads while ensuring governance, so organizations spend less time managing data pipelines and more time unlocking value from their data. The new offerings and enhancements include:

  • Support for the Iceberg open standard for moving data into modern lakehouse architectures.
  • File transfer support for Google Drive, SharePoint, and OneDrive for the movement of unstructured data.
  • An Enterprise Connector Bundle that includes connectors for NetSuite, Oracle database with Change Data Capture (CDC), SAP HANA, ServiceNow, and Workday.
  • Support for OAuth 2.0, which ensures secure authentication while simplifying integrations by reducing manual work.
  • Support for OpenTelemetry (OTEL), which improves pipeline observability and monitoring with metrics for visibility into sync performance, API activity, and data volume movement.
  • An updated Python Connector Developer Kit (CDK) that enables faster connector development.

Amplitude announced the rollout of Session Replay Everywhere. The solution uses AI-powered insights to surface what matters most, helping teams move faster without sifting through data while keeping customer data secure. Replays are embedded right inside a user’s analytics, experiments, and surveys, with instant insights from AI-powered summaries and recommendations.

Confluent announced significant advancements in Tableflow, which makes operational data in Confluent Cloud easily accessible to data lakes and warehouses. With Tableflow, all streaming data in Confluent Cloud can be accessed in popular open table formats, enabling advanced analytics, real-time artificial intelligence (AI), and next-generation applications. Support for Apache Iceberg is now generally available (GA), and, as a result of an expanded partnership with Databricks, a new early access program for Delta Lake is now open. Additionally, Tableflow now offers enhanced data storage flexibility and seamless integrations with leading catalog providers, including AWS Glue Data Catalog and Snowflake Open Catalog, Snowflake’s managed service for Apache Polaris.

dbt Labs announced the general availability of its AI-powered data assistant, dbt Copilot, along with several new product features. The launch includes several new, enterprise-ready enhancements, including OpenAI Bring Your Own Key (BYOK), Azure OpenAI service, and a custom style guide for standardized SQL formatting (in beta).

DDN unveiled xFusionAI, a new AI infrastructure that merges best-in-class training and inference performance into a single, highly optimized platform. xFusionAI combines the power of DDN’s EXAScaler parallel file system with Infinia’s AI-native scalability, elasticity, and performance, creating a hybrid platform that reshapes AI workflows for enterprises, hyperscalers, and research institutions.

HUMAN Security today announced HUMAN Sightline, an innovative suite of capabilities that detects, isolates, and tracks individual bot profiles. The solution enables security teams to conduct faster investigations and optimize their response to evolving threats in the era of AI. This fundamentally transforms bot management by delivering insights into automated traffic.

Kore.ai announced the Kore.ai Agent Platform, an enterprise-grade multi-agent orchestration infrastructure for developing, deploying, and managing sophisticated agentic applications at scale. The platform’s Search and Data AI provides critical enterprise and user context for any AI agent that needs to operate effectively in real-world business environments, integrating 100+ pre-built connectors for structured and unstructured data.

Precisely announced significant advancements to its Automate SAP Data API, designed to simplify complex SAP ERP integrations and accelerate digital transformation efforts for enterprise organizations. The solution helps companies address those integration challenges by leveraging Precisely’s easy-to-use no-code/low-code platforms for SAP process automation – Automate Studio and Automate Evolve.

Partnerships, collaborations, and more

Alluxio announced a strategic collaboration with the vLLM Production Stack, an open-source implementation of a cluster-wide full-stack vLLM serving system developed by LMCache Lab at the University of Chicago. This partnership aims to advance the next-generation AI infrastructure for large language model (LLM) inference.

C3 AI announced a strategic alliance with PwC to deploy AI-powered business transformation at enterprise scale across critical industries. The alliance combines C3 AI’s Enterprise AI application software solutions with PwC’s deep domain expertise and advisory services in change management and organizational transformation.

Dataiku announced a new analytics modernization program with Deloitte aimed at helping enterprises streamline data operations, reduce technical debt, and enhance AI-driven capabilities and benefits across their organizations. Deloitte will draw on its deep AI and modernization talent and capabilities to migrate enterprise customers from legacy data systems to the Dataiku Universal AI Platform, building intelligence into their daily operations through modern analytics, Generative AI (GenAI), and AI agents.

Reltio introduced Reltio Integration with Alation. The new integration eliminates the need for organizations to build one-off, costly, time-consuming custom integrations that stitch together several systems related to data governance. By automatically synchronizing Alation with Reltio Data Cloud’s entity, relationship, and attribute metadata, companies can now accelerate the implementation of their best-of-breed strategies and simplify data governance.

StreamNative announced the General Availability (GA) of Ursa Engine for Bring Your Own Cloud (BYOC) on Amazon Web Services (AWS), which natively integrates with Snowflake Open Catalog, Databricks Unity Catalog, and Amazon S3 Tables to seamlessly stream real-time data into AI-ready data lakehouses. Along with Ursa Engine GA, StreamNative announces UniLink (Universal Linking) public preview to help customers migrate from legacy data streaming platforms to Ursa.

If your company has real-time analytics news, send your announcements to ssalamone@rtinsights.com.


About Salvatore Salamone

Salvatore Salamone is a physicist by training who has been writing about science and information technology for more than 30 years. During that time, he has been a senior or executive editor at many industry-leading publications, including High Technology, Network World, Byte Magazine, Data Communications, LAN Times, InternetWeek, Bio-IT World, and Lightwave, The Journal of Fiber Optics. He is also the author of three business technology books.
