ML Ops: Beginner

ML Ops | Serve ML models in production | AWS | GCP | FastAPI | gRPC | Docker | TensorFlow | Keras | PyTorch

ML Ops topped LinkedIn’s Emerging Jobs ranking, with a recorded growth of 9.8 times in five years.

What you’ll learn

  • An introduction to ML Ops.
  • Deploy ML models to AWS and GCP via EC2 instances and VMs.
  • Run a computer vision model built with the PyTorch and TensorFlow frameworks (see the sketch after this list).
  • Build an API with FastAPI.
  • Get an introduction to gRPC in Python and build your own gRPC API.
  • Get an introduction to Docker.
  • Take your ML ideas to production.
  • Containerize your ML apps.
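
For the computer vision item above, the kind of local inference the course starts from looks roughly like this. This is a minimal sketch, assuming torchvision 0.13+ and Pillow; the ResNet-18 model and the image path are illustrative choices, not taken from the course.

```python
# Minimal local-inference sketch (assumes torchvision 0.13+ and Pillow).
# ResNet-18 and "example.jpg" are hypothetical choices for illustration.
import torch
from torchvision import models, transforms
from PIL import Image

# Load a pretrained ImageNet classifier and switch to inference mode.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing (resize, crop, normalize).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")  # hypothetical input image
batch = preprocess(image).unsqueeze(0)            # add a batch dimension

with torch.no_grad():
    logits = model(batch)
    class_index = int(logits.argmax(dim=1))

print(f"Predicted ImageNet class index: {class_index}")
```

Everything after this point in the course is about getting a script like that out of the notebook and behind an API, into a container, and onto the cloud.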

Course Content

  • Introduction –> 5 lectures • 9min.
  • PyTorch & TensorFlow –> 5 lectures • 28min.
  • FastAPI & gRPC APIs –> 8 lectures • 1hr 42min.
  • Docker –> 3 lectures • 17min.
  • AWS & GCP Deployment –> 2 lectures • 53min.
  • Conclusion –> 1 lecture • 5min.

Most individuals looking to enter the data industry already have machine learning skills, yet most data scientists are unable to put the models they build into production. As a result, companies are seeing a growing gap between the models they build and what actually reaches production. Most machine learning models built at these companies go unused because they never reach the end user. ML Ops engineering is a new role that bridges this gap and lets companies productionize their data science models and get value out of them.

This is a rapidly growing field, as more companies realize that data scientists alone aren't enough to get value out of machine learning models. It doesn't matter how accurate a model is if it is unusable in a production setting.

Most people looking to break into the data industry tend to focus on data science. It is a good idea to shift your focus to ML Ops since it is an equally high-paying field that isn’t highly saturated yet.

Learn ML Ops from the ground up! ML Ops can be described as the techniques for implementing and automating continuous integration, continuous delivery, and continuous training for machine learning systems. As most of you know, the majority of ML models never see life outside of the whiteboard or Jupyter notebook. This course is the first step in changing that!

Take your ML ideas from the whiteboard to production by learning how to deploy ML models to the cloud! You will learn how to interact with ML models locally, then create an API around them (FastAPI & gRPC), containerize it (Docker), and deploy it (AWS & GCP); a minimal sketch of the API step follows below. By the end of this course you will have the foundational knowledge to productionize your ML workflows and models.
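
As a taste of the API step, here is a minimal sketch of wrapping a pretrained classifier behind a FastAPI endpoint. The /predict route, the field names, and the model are illustrative assumptions, not the course's actual code; FastAPI needs the python-multipart package installed to accept file uploads.

```python
# Minimal FastAPI serving sketch (assumes fastapi, uvicorn, python-multipart,
# torch, torchvision, and Pillow are installed). The /predict route and the
# ResNet-18 model are hypothetical choices for illustration.
import io

import torch
from fastapi import FastAPI, File, UploadFile
from PIL import Image
from torchvision import models, transforms

app = FastAPI()

# Load the model once at startup and keep it in inference mode.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@app.post("/predict")
async def predict(file: UploadFile = File(...)):
    # Decode the uploaded image and run it through the classifier.
    image = Image.open(io.BytesIO(await file.read())).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        class_index = int(model(batch).argmax(dim=1))
    return {"class_index": class_index}
```

Saved as, say, main.py, this runs locally with `uvicorn main:app`; the same app is what later gets copied into a Docker image and deployed as a container on an EC2 instance or a GCP VM.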

Course outline:

1. Introduction

2. Environment set up

3. PyTorch model inference

4. TensorFlow model inference

5. API introduction

6. FastAPI

7. gRPC

8. Containerize our APIs using Docker

9. Deploy containers to AWS

10. Deploy containers to GCP

11. Conclusion
