Main repositories of MLOps tools on GitHub


MLOps was introduced to provide an end-to-end machine learning development process for designing, building, and managing reproducible, testable, and scalable ML-based software. MLOps has enabled organizations to collaborate across departments and speed up workflows that typically stall due to various production issues. Below, we present the best MLOps tool repositories available on GitHub.

(Image credits: Microsoft Azure)

Here are the best GitHub MLOps tool repositories:

Seldon Core

Seldon Core is an MLOps framework for packaging, deploying, monitoring, and managing thousands of production machine learning models. It converts machine learning models (based on TensorFlow, PyTorch, H2O, etc.) or language wrappers (based on Python, Java, etc.) into production microservices.

Seldon Core scales to thousands of production machine learning models and provides advanced ML functionality, including advanced metrics, request logging, explainers, outlier detectors, A/B testing, canaries, and more. It simplifies deployment with prepackaged inference servers and language wrappers.
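The language-wrapper path can be sketched as follows: with Seldon's Python wrapper, a model is simply a class exposing a `predict` method, which Seldon discovers by class name and exposes as a microservice. A minimal sketch (the class name `MyModel` and the hard-coded coefficient are illustrative, standing in for a real trained model):

```python
# MyModel.py — minimal Seldon Core Python model wrapper (sketch).
# Seldon's Python wrapper discovers the class by name and calls its
# predict() method; no seldon-core import is needed in the model file.

class MyModel:
    def __init__(self):
        # Load weights or artifacts here; a fixed coefficient stands in
        # for a real trained model.
        self.coef = 2.0

    def predict(self, X, features_names=None):
        # X arrives as an array-like payload of feature rows;
        # return one prediction row per input row.
        return [[x * self.coef for x in row] for row in X]
```

The file can then be containerized with Seldon's prepackaged wrapper tooling, which exposes the class over REST and gRPC.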

Check out the full repo here.


Polyaxon

Polyaxon can be used to create, train, and monitor large-scale deep learning applications. The platform is designed to ensure the reproducibility, automation, and scalability of ML applications. Polyaxon can be deployed in any data center or cloud provider, or run as a hosted service. It supports all major deep learning frameworks, including TensorFlow, MXNet, Caffe, and Torch.

According to the team that developed Polyaxon, the platform makes it possible to develop ML applications faster, easier and more efficiently by managing workloads with intelligent container and node management. It even turns GPU servers into self-service shared resources for teams.

Installation: $ pip install -U polyaxon

Check out the full repo here.

Hydrosphere Serving

Hydrosphere Serving provides deployment and versioning options for machine learning models in production. This MLOps platform:

  • Serves machine learning models developed in any language or framework. It wraps them in a Docker image and deploys them to a production cluster, exposing HTTP, gRPC, and Kafka interfaces.
  • Shadows traffic between different model versions to examine how each version behaves on the same traffic.
  • Version-controls models and pipelines as they are deployed.

Check out the full repo here.


Metaflow

Metaflow was originally developed at Netflix to meet the needs of its data scientists working on demanding real-life data science projects. Netflix open-sourced Metaflow in 2019.

Metaflow helps users design their workflows, run them at scale, and deploy them to production. It automatically versions and tracks all experiments and data. Metaflow provides built-in integrations with storage, compute, and machine learning services in the AWS cloud, with no code changes required.
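A Metaflow workflow is a plain Python class whose `@step` methods form the DAG; attributes assigned to `self` are the automatically versioned artifacts mentioned above. A minimal sketch, assuming Metaflow is installed (the flow and artifact names are illustrative):

```python
# hello_flow.py — a minimal Metaflow flow (sketch).
from metaflow import FlowSpec, step

class HelloFlow(FlowSpec):

    @step
    def start(self):
        # Any attribute assigned to self is versioned and tracked
        # as an artifact across runs.
        self.message = "hello from Metaflow"
        self.next(self.end)

    @step
    def end(self):
        print(self.message)

if __name__ == "__main__":
    HelloFlow()
```

Running `python hello_flow.py run` executes the flow locally; the same file can later run at scale on AWS.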

Check out the full repo here.



Kedro

Kedro is an open-source Python framework for creating reproducible, maintainable, and modular data science code. It applies software engineering principles to machine learning code, including modularity, separation of concerns, and versioning.
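The modularity principle means pipeline steps are plain, individually testable Python functions, with I/O kept separate. A minimal sketch (the function and dataset names are illustrative, and the commented Kedro wiring is an assumption, not verified against a specific Kedro version):

```python
# Kedro-style modular nodes (sketch): pure functions with no I/O,
# which keeps the logic testable in isolation.

def clean(raw_rows):
    # Drop rows containing missing values.
    return [row for row in raw_rows if None not in row]

def summarize(rows):
    # Compute a simple per-column mean.
    cols = list(zip(*rows))
    return [sum(c) / len(c) for c in cols]

# With Kedro installed, the functions above would be wired into a
# pipeline roughly like this (sketch):
#
# from kedro.pipeline import node, pipeline
# data_pipeline = pipeline([
#     node(clean, inputs="raw_rows", outputs="clean_rows"),
#     node(summarize, inputs="clean_rows", outputs="summary"),
# ])
```

Keeping nodes free of file handling is what lets Kedro swap datasets (local files, cloud storage) without touching the logic.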

Check out the full repo here.


BentoML

BentoML is a flexible, high-performance framework for serving, managing, and deploying machine learning models. It provides a standard interface for describing a prediction service, performing model inference efficiently, and integrating model-serving workloads with cloud infrastructure.

BentoML features include:

  • Production-ready online serving.
  • Support for multiple ML frameworks, including PyTorch and TensorFlow.
  • Containerized model server for production deployment with Docker, Kubernetes, etc.
  • Automatic discovery and packaging of all dependencies.
  • Serving of any Python code alongside trained models.
  • Health check endpoint and Prometheus /metrics endpoint for monitoring.

Check out the full repo here.


Flyte

Flyte is a production-grade, container-native, and secure workflow platform optimized for large-scale processing. Written in Go, it enables highly concurrent, scalable, and maintainable workflows for machine learning and data processing. It connects disparate compute backends using a type-safe data dependency graph and records every change to a pipeline, making it possible to rewind to earlier versions.
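The type-safe dependency graph is expressed through Flyte's flytekit Python SDK, where type annotations on tasks are mandatory and define the graph's edges. A minimal sketch, assuming flytekit is installed (the task and workflow names are illustrative):

```python
# A minimal Flyte workflow sketch using the flytekit SDK.
from flytekit import task, workflow

@task
def square(x: int) -> int:
    # Each task runs in its own container when executed on a cluster;
    # the int annotations feed Flyte's type-safe dependency graph.
    return x * x

@workflow
def training_wf(x: int = 3) -> int:
    # The workflow wires tasks together via typed data dependencies.
    return square(x=x)
```

Workflows defined this way can be executed locally for testing before being registered with a Flyte cluster.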

Check out the full repo here.
