
cnvrg.io AI Operating System Teams Up with Supermicro to Deliver End-to-End AI Experience


cnvrg.io launches a solution with Supermicro to increase utilization of 3rd Gen Intel Xeon Scalable processors and next-gen GPU systems

cnvrg.io, the AI OS, and Super Micro Computer, Inc. (Nasdaq: SMCI), a global leader in enterprise computing, storage, networking solutions, and green computing technology, announce a solution collaboration that maximizes Supermicro systems for AI workloads. cnvrg.io bundled with Supermicro's purpose-built AI servers creates a managed, coordinated, and easy-to-consume ML infrastructure with a ready-to-use, out-of-the-box experience. cnvrg.io running on Supermicro servers gives data scientists and AI practitioners a single unified control plane to build, deploy, and manage machine learning workloads. With cnvrg.io, data scientists can bring high-impact models to production faster and monitor those models on top of Supermicro's purpose-built AI solutions.

Machine Learning (ML) has seen extraordinary growth in the last two years. While 70% of enterprise AI practitioners are exploring AI use cases, only 7% are able to deploy and scale across business units. Gartner predicts that, through 2023, at least 50% of IT leaders will struggle to move their AI predictive projects past proof of concept to a production level of maturity. The top barrier enterprises face in scaling AI is the complexity of integrating the solution with existing enterprise applications and infrastructure. cnvrg.io delivers an operating system for machine learning that gives data science and IT practitioners a unified control plane to manage diverse ML and DL workloads and run ML in production at scale.

The new Supermicro X12 GPU systems pair dual 3rd Gen Intel Xeon Scalable processors with the latest NVIDIA A100 GPUs, delivering maximum processing acceleration for the most compute-intensive workloads at the highest available density. They are well suited to AI/deep learning, scientific visualization, and high-performance computing platforms. With cnvrg.io's native integration with Intel's oneContainer catalog, data scientists and engineers can instantly spin up optimized CPU workloads and boost overall performance with a best-in-class user experience. Through this integration with optimized, Intel-certified containers, cnvrg.io delivers consistently full resource utilization on Intel compute architecture. The cnvrg.io AI operating system is compute-agnostic, making it complementary to the standard off-the-shelf design of Supermicro systems, and it runs diverse workloads in a single software platform. Unlike other platforms, each stage of a pipeline can be attached to a different resource, including support for multi-cluster and heterogeneous compute pipelines. cnvrg.io helps increase utilization of enterprise infrastructure while Supermicro delivers the highest performance for AI training, inferencing, and deployment workloads.
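As a purely illustrative sketch of that last idea, the snippet below models a pipeline whose stages each declare their own compute target (a CPU stage on Xeon nodes, a GPU stage on A100 nodes). The class names, fields, and template names are hypothetical and are not the cnvrg.io SDK or flow schema:

```python
from dataclasses import dataclass

# Hypothetical, simplified model of a heterogeneous ML pipeline:
# each stage declares the compute template it should run on.
# These classes and names are illustrative only, not the cnvrg.io API.

@dataclass
class ComputeTemplate:
    name: str       # template name, e.g. a cluster-specific preset
    cluster: str    # which cluster the stage is scheduled to
    resources: str  # human-readable resource description

@dataclass
class PipelineStage:
    title: str
    command: str
    compute: ComputeTemplate

# A CPU template (e.g. 3rd Gen Intel Xeon Scalable nodes) for data prep,
# and a GPU template (e.g. NVIDIA A100 nodes) for training.
xeon_cpu = ComputeTemplate("xeon-large", "on-prem-cpu", "32 vCPU / 128 GB RAM")
a100_gpu = ComputeTemplate("a100-x4", "on-prem-gpu", "4x NVIDIA A100")

pipeline = [
    PipelineStage("preprocess", "python preprocess.py", xeon_cpu),
    PipelineStage("train", "python train.py --epochs 50", a100_gpu),
    PipelineStage("evaluate", "python evaluate.py", xeon_cpu),
]

for stage in pipeline:
    print(f"{stage.title}: '{stage.command}' on {stage.compute.name} "
          f"({stage.compute.cluster}, {stage.compute.resources})")
```

The point of the sketch is only the design idea: because compute is declared per stage rather than per pipeline, CPU-bound preprocessing and GPU-bound training can land on different clusters while remaining one workflow.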

“As AI-driven enterprises scale their infrastructure for machine and deep learning, it can be a challenge to manage and allocate those resources and maximize the potential of the hardware,” says Yochay Ettun, CEO and Co-founder at cnvrg.io. “The end-to-end solution built with cnvrg.io software on Supermicro hardware increases utilization of servers and maximizes performance for AI training and inferencing at scale.”

“Supermicro offers the broadest portfolio of compute, GPU, and storage systems, and cnvrg.io makes it easy to run and scale AI machine learning software infrastructure with its container-based optimization of available compute resources,” said Raju Penumatcha, SVP and Chief Product Officer at Supermicro. “Together, cnvrg.io and Supermicro offer customers fully tested solutions for optimal scalable performance with the best out-of-the-box experience for AI/ML deployment.”

“cnvrg.io running on Supermicro purpose-built servers delivers an end-to-end ML solution with both hardware and software optimized to accelerate time to value for AI projects,” says Monica Livingston, AI Sales Director at Intel Corporation. “This solution is prebuilt with software optimizations for Intel XPUs, enabling customers to quickly drive optimal performance out of their Intel hardware and improve utilization of their servers. We’re happy to collaborate with cnvrg.io and Supermicro to deliver state-of-the-art technology that is helping customers streamline their AI projects, efficiently use their infrastructure, and bring their AI models to production faster.”


cnvrg.io running on Supermicro systems significantly reduces time to market and maximizes AI workload utilization and visibility. To learn more about how cnvrg.io and Supermicro can improve the performance of your AI infrastructure, visit https://cnvrg.io/solutions/supermicro/ or https://www.supermicro.com/solutions/Solution-Brief_cnvrg.io.pdf

About cnvrg.io

cnvrg.io is an AI OS, transforming the way enterprises manage, scale, and accelerate AI and data science development from research to production. The code-first platform is built by data scientists, for data scientists, and offers unrivaled flexibility to run on-premises or in the cloud. From advanced MLOps to continual learning, cnvrg.io brings top-of-the-line technology to data science teams so they can spend less time on DevOps and focus on the real magic: algorithms. Since using cnvrg.io, teams across industries have gotten more models to production, resulting in increased business value.

Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.  
