Train models on versioned data and create GitHub PR with champion model
Load and version data from PostgreSQL, run it through a preprocessing pipeline using Apache Spark on Kubernetes, and train multiple models with hyperparameter optimization for each. Compare metrics, pick the top-performing model, and open a GitHub pull request.
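The train-compare-select step of such a pipeline can be sketched as below. This is a minimal illustration assuming scikit-learn: the PostgreSQL load and Spark preprocessing are stood in for by a synthetic dataset, and the GitHub pull-request step is only indicated in a comment.

```python
# Sketch: train several candidate models with hyperparameter optimization,
# then select a champion by cross-validated score. Synthetic data replaces
# the PostgreSQL/Spark stages; the PR step is noted in a comment only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hyperparameter grid per candidate model.
candidates = {
    "logreg": (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    "forest": (RandomForestClassifier(random_state=0), {"n_estimators": [50, 100]}),
}

results = {}
for name, (model, grid) in candidates.items():
    search = GridSearchCV(model, grid, cv=3)  # hyperparameter optimization
    search.fit(X_train, y_train)
    results[name] = (search.best_score_, search.best_estimator_)

# Champion = best cross-validated score. A real pipeline would now commit
# the model artifact to a branch and open a pull request via the GitHub API.
champion = max(results, key=lambda n: results[n][0])
print(champion, round(results[champion][0], 3))
```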
Deploy trained models with Canary Rollout and A/B testing
An automated ML pipeline that continuously trains models on new data snapshots and uses a canary rollout to build a champion/challenger mechanism, with automatic traffic routing, validation, advanced conditions, and a rollback option for safe deployment.
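The core of a champion/challenger canary can be sketched in a few lines. This is an illustrative example, not cnvrg.io's implementation: the routing fraction, error metrics, and regression margin are all hypothetical parameters.

```python
# Sketch of canary traffic routing plus a promote/rollback decision rule.
import random

def route(champion, challenger, canary_fraction=0.1):
    """Send a request to the challenger with probability canary_fraction;
    all other traffic keeps hitting the champion."""
    return challenger if random.random() < canary_fraction else champion

def evaluate_canary(champion_error, challenger_error, max_regression=0.02):
    """Promote the challenger only if its error does not regress beyond
    the allowed margin; otherwise roll back to the champion."""
    if challenger_error <= champion_error + max_regression:
        return "promote"
    return "rollback"

print(evaluate_canary(0.10, 0.08))  # challenger improves -> "promote"
print(evaluate_canary(0.10, 0.20))  # regression -> "rollback"
```

In a real rollout the `canary_fraction` would ramp up gradually as validation conditions pass, and a "rollback" decision would shift all traffic back to the champion.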
Train multiple models and select the best one to deploy to SageMaker
Build a pipeline that reads data from PostgreSQL, enriches it with external data sources (weather, holidays), and trains two models with hyperparameter optimization. Select the top model based on custom metrics and deploy it to production using SageMaker.
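The enrichment step is essentially a join against the external sources. A minimal sketch, assuming pandas and using hypothetical sales, weather, and holiday data in place of the PostgreSQL query:

```python
# Sketch: enrich transactional data with weather and holiday features.
import pandas as pd

# Hypothetical transactional data (would come from PostgreSQL).
orders = pd.DataFrame({
    "date": ["2023-01-01", "2023-01-02", "2023-01-03"],
    "sales": [120, 90, 150],
})

# External sources: a daily weather feed and a holiday calendar.
weather = pd.DataFrame({
    "date": ["2023-01-01", "2023-01-02", "2023-01-03"],
    "temp_c": [3.1, 5.4, 2.0],
})
holidays = {"2023-01-01"}

# Left-join weather onto orders by date, then flag holidays.
enriched = orders.merge(weather, on="date", how="left")
enriched["is_holiday"] = enriched["date"].isin(holidays)
print(enriched)
```

The enriched frame then feeds the two training jobs; the deployment step itself would use the SageMaker SDK against the selected model artifact.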
Manage resources and increase utilization with CPU/GPU dashboards
Connect all your GPUs, CPUs and compute resources to a single, unified environment (with cloud-bursting built-in). Monitor and see in-depth analysis of the usage, utilization and consumption of compute across your ML workloads.
Track & monitor predictions in production and trigger alerts/retraining
Automatically log all predictions in a scalable, Kubernetes-based environment and use cnvrg.io to monitor each sample, both input and prediction. Identify anomalies, track model decay and data correlation, and trigger retraining or alerts automatically.
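A simple form of the decay check behind such a trigger can be sketched as below. This is an illustrative rule, not cnvrg.io's detector: it flags drift when the mean of recent prediction scores moves too many baseline standard deviations away from the baseline mean.

```python
# Sketch: flag model decay by comparing recent prediction scores
# against a baseline window logged at deployment time.
from statistics import mean, stdev

def check_drift(baseline, recent, z_threshold=3.0):
    """Return True when the recent mean deviates from the baseline mean
    by more than z_threshold baseline standard deviations."""
    spread = stdev(baseline) or 1.0  # guard against a zero-variance baseline
    z = abs(mean(recent) - mean(baseline)) / spread
    return z > z_threshold

baseline_scores = [0.70, 0.72, 0.69, 0.71, 0.70, 0.73, 0.68]
stable_scores = [0.71, 0.70, 0.72]   # no drift -> keep serving
drifted_scores = [0.40, 0.38, 0.42]  # decay -> trigger retraining/alert

print(check_drift(baseline_scores, stable_scores))   # False
print(check_drift(baseline_scores, drifted_scores))  # True
```

In practice the drift signal would fire a webhook or flow trigger that kicks off the retraining pipeline or pages the on-call team.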
Launch OpenMPI, Horovod and distributed deep-learning jobs in a single click
Launch OpenMPI jobs on any multi-node Kubernetes cluster (cloud or on-prem) in a single click. Use the built-in Kubeflow MPI operator to run your Horovod / TensorFlow distributed training and track performance in real time using the cnvrg.io dashboard.
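Under the hood, the MPI operator consumes an MPIJob custom resource. A rough sketch of such a manifest, built here as a plain Python dict for illustration: the image, script name, and replica counts are placeholders, and the CRD apiVersion may differ by operator version (v1 vs. v2beta1).

```python
# Sketch of a Kubeflow MPIJob manifest for a 2-worker Horovod run.
# Image and train.py are placeholder names, not real artifacts.
n_workers = 2

mpijob = {
    "apiVersion": "kubeflow.org/v1",  # may be v2beta1 on newer operators
    "kind": "MPIJob",
    "metadata": {"name": "horovod-train"},
    "spec": {
        "slotsPerWorker": 1,
        "mpiReplicaSpecs": {
            # The launcher pod runs mpirun and coordinates the workers.
            "Launcher": {
                "replicas": 1,
                "template": {"spec": {"containers": [{
                    "name": "launcher",
                    "image": "horovod/horovod:latest",  # placeholder image
                    "command": ["mpirun", "-np", str(n_workers),
                                "python", "train.py"],
                }]}},
            },
            # Worker pods host the actual training processes.
            "Worker": {
                "replicas": n_workers,
                "template": {"spec": {"containers": [{
                    "name": "worker",
                    "image": "horovod/horovod:latest",
                }]}},
            },
        },
    },
}

# A real pipeline would submit this via kubectl or the Kubernetes Python
# client; here we only inspect the structure.
print(mpijob["kind"], mpijob["spec"]["mpiReplicaSpecs"]["Worker"]["replicas"])
```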
Announcing CORE, a free ML Platform for the community to help data scientists focus more on data science and less on technical complexity
With cnvrg.io we were able to increase our model throughput by up to 50% and on average by 30% compared to RESTful APIs. cnvrg.io also allows us to monitor our models in production, set alerts and retrain with highly automated ML pipelines
Avi Gabay, Director of Architecture at Playtika
As with many data science professionals, our team is hard to please. They want the flexibility to use any language, and the ability to write their own custom packages to improve performance, set configurations, etc. With cnvrg.io, this is the first time I’ve heard our data scientists and analysts say ‘when can we have it’.
Alexander Ryabov, Head of Data Services & Business Intelligence at Wargaming.net
Working in a hybrid cloud environment has major advantages but can be increasingly complex to manage, especially for AI workloads. cnvrg.io has the potential to enable us to operate in a hybrid cloud environment seamlessly. The new ML infrastructure dashboard could fill a major need in connecting our infrastructure to ML projects. It provides visibility into our on-prem GPU clusters and cloud resources, paving the way for increasing the utilization and ROI of our GPUs.
Bruce King, Data Science Technologist at Seagate Technology Advanced Analytics Group
Lightricks is a research-driven company, which is why we’ve built a team with some of the top computer vision researchers in the country. cnvrg.io ensures our highly qualified researchers are focused on building the industry-leading AI technology that we are now world-renowned for, instead of spending time on engineering, configuration and DevOps.