Deploying ML Workloads Leveraging NVIDIA
Multi-Instance GPU with OpenShift and Intel® Tiber™ AI Studio
GPUs are the powerhouses of machine learning and deep learning workloads. And now, with the release of the NVIDIA A100 Tensor Core GPU and NVIDIA DGX A100 system, GPU performance and capabilities for AI computing have skyrocketed, providing unprecedented acceleration at every scale and enabling data scientists to focus on innovation.
In this webinar, we will discuss the benefits of a modern AI infrastructure with DGX A100 at its core, Red Hat OpenShift providing managed Kubernetes services, and Intel® Tiber™ AI Studio contributing industry-leading resource management and MLOps capabilities for IT and AI teams. The NVIDIA Multi-Instance GPU (MIG) feature can partition each A100 GPU into as many as seven GPU instances for optimal utilization, effectively expanding access to every user and application. This capability has the power to transform how data scientists work and opens the possibility of significantly wider access to GPU technology.
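As a concrete illustration of the partitioning described above, MIG is configured on an A100 with NVIDIA's `nvidia-smi` tool. The commands below are a sketch of one possible layout (seven 1g.5gb instances, the smallest profile on an A100 40GB); the exact profiles available depend on the GPU model and driver.

```shell
# Enable MIG mode on GPU 0 (takes effect after a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# Create seven 1g.5gb GPU instances (profile ID 19 on A100 40GB)
# and the matching default compute instances (-C)
sudo nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C

# List the resulting MIG devices
nvidia-smi -L
```

Once created, each MIG instance appears as an independently schedulable device, which is what allows an orchestrator such as OpenShift to hand out right-sized slices to individual workloads.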
We will present how the new MIG integration on Intel® Tiber™ AI Studio delivers accelerated ML workloads of all sizes to AI teams, using OpenShift for advanced orchestration.
You will learn how to:
Leverage NVIDIA MIG technology to expand the performance and value of each NVIDIA A100 Tensor Core GPU
Execute diverse workloads with MIG instances on Kubernetes infrastructure
Share GPUs and MIG instances with the Intel® Tiber™ AI Studio meta-scheduler, achieving high utilization and data science productivity
Deliver self-service attachment of MIG instances to any ML job
Build an ML pipeline of multiple jobs, each with a right-sized GPU and guaranteed quality of service (QoS)
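On the Kubernetes side, the workflow in the list above comes down to requesting a MIG slice as a named extended resource in a pod spec. The sketch below assumes the resource-naming convention exposed by the NVIDIA GPU Operator (`nvidia.com/mig-1g.5gb`); the job name, image, and entrypoint are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: train-job-mig                         # hypothetical job name
spec:
  restartPolicy: Never
  containers:
  - name: trainer
    image: nvcr.io/nvidia/pytorch:24.01-py3   # example NGC image
    command: ["python", "train.py"]           # placeholder entrypoint
    resources:
      limits:
        nvidia.com/mig-1g.5gb: 1              # one MIG 1g.5gb slice
```

Because the slice is requested like any other resource limit, the scheduler can bin-pack many such jobs onto a single A100, which is what enables the per-job QoS guarantees described above.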