
Optimize machine learning through MLOps with Dell Technologies and cnvrg.io

Dell Technologies has worked closely with cnvrg.io and other partners to deliver a tested and validated solution for MLOps

Enterprises increasingly demand rapid insights from data to keep pace with changes in the business and the market. Yet it takes about nine months on average to move models from the lab into production. To reduce time to value, data scientists are becoming proficient in a broader skill set, including basic software engineering skills such as code modularization, reuse, testing, and versioning, as well as infrastructure-oriented skills like performance testing and tuning. Running isolated experiments across a collection of notebooks is no longer enough.

In response, MLOps has emerged as a set of practices that borrows from machine learning, DevOps, and data engineering to automate the model production lifecycle reliably and efficiently. MLOps software platforms create a centralized hub in the enterprise for:

  • Managing machines, compute, and storage
  • Distributing machine learning jobs 
  • Storing, versioning, and serving models 
  • Managing load balancers, pods and containers 
  • Scheduling and placing jobs across different environments
  • Monitoring job and cluster performance 

MLOps platforms combine the operational benefits found in point solutions for model and data versioning, code versioning, and deployment orchestration with the capabilities of software project repositories. This means that model code, data, features, and artifacts can all be managed like other enterprise software development projects. With MLOps, companies can bridge the gap between the experimental nature of data science and the operational nature of software deployment.
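
To make that centralized workflow concrete, here is a brief sketch of how a data scientist might submit a versioned training run and register the resulting model through an MLOps platform. The `mlops_client` package, its class and method names, and the project, dataset, and compute identifiers are illustrative assumptions for this sketch, not the cnvrg.io SDK.

```python
# Hypothetical sketch of a centralized MLOps workflow.
# The mlops_client package and all of its names are illustrative
# assumptions, not a real SDK.
from mlops_client import Client

client = Client(url="https://mlops.example.com", api_key="YOUR_API_KEY")

# Tie the run to a project and a versioned dataset so code, data,
# and artifacts stay traceable together.
project = client.get_project("churn-prediction")
dataset = project.get_dataset("customers", version="v12")

# Submit the training job to a managed compute template instead of
# a hand-managed notebook environment.
run = project.submit_run(
    command="python train.py --epochs 20",
    dataset=dataset,
    compute="gpu-small",
)
run.wait()  # the platform schedules, monitors, and logs the job

# Register the produced model so it can be versioned, reviewed, and served.
model = project.register_model(name="churn-model",
                               artifact=run.artifact("model.pkl"))
print(model.name, model.version)
```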

Dell Technologies, cnvrg.io, and other partners have collaborated to deliver an MLOps solution, jointly engineered and tested, to help organizations capitalize on the benefits of ML and AI. The joint solution includes:

  • cnvrg.io as the MLOps platform
  • Dell AI-enabling infrastructure and optional edge computing
  • Applications, frameworks, and tools that researchers, data scientists, and developers can use to build ML models and analyze data

cnvrg.io provides an enterprise-ready MLOps platform for data scientists and ML engineers to develop and publish AI applications. It’s a Kubernetes-based deployment that includes container images with pre-configured templates. 
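
Because the platform runs as a set of Kubernetes workloads, an operator can inspect it with standard Kubernetes tooling. The short sketch below uses the official `kubernetes` Python client to list the pods in the platform's namespace; the `cnvrg` namespace name is an assumption for illustration.

```python
# List the pods backing a Kubernetes-hosted MLOps platform.
# Assumes a working kubeconfig; the "cnvrg" namespace is an illustrative guess.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside the cluster

v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod(namespace="cnvrg").items:
    print(f"{pod.metadata.name:60s} {pod.status.phase}")
```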

This solution is validated to run on VMware vSphere with Tanzu, a virtualization platform that enables an enterprise to manage on-demand Kubernetes containers alongside traditional virtual machines, providing complete life cycle management of those compute and storage resources.

The hardware portion of the Dell solution uses either Dell PowerEdge servers or Dell VxRail hyperconverged infrastructure (HCI) for compute. Dell PowerScale provides the analytics performance and concurrency at scale that is critical to consistently feeding the most data-hungry ML and AI algorithms.

Accelerate your AI journey today with cnvrg.io and Dell. Download the white paper to learn more! Ready to begin? Contact us here to set up a demonstration.
