Once the machine learning model is deployed, most of the day-to-day DevOps and maintenance burden is handled for you. Seldon Core balances traffic across the deployed model replicas, while the underlying Kubernetes infrastructure keeps the model pods healthy, restarting or rescheduling them as needed. The Seldon Core service communicates with the Fusion ML service to track which models are available. Finally, the machine learning stages within pipelines interact with the ML Service and Seldon Core over gRPC, which keeps serialization of model inputs and outputs fast enough for low-latency pipelines.
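To make the replica and traffic-balancing behavior concrete, here is a minimal sketch of a Seldon Core `SeldonDeployment` manifest. The model name, image, and replica count are illustrative assumptions, not values from this document; the `replicas` field is what tells Seldon Core (and Kubernetes) how many model pods to run and balance traffic across.

```yaml
# Hypothetical example manifest -- names and image are placeholders.
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: example-model        # assumed deployment name
spec:
  predictors:
    - name: default
      replicas: 3            # Seldon Core balances traffic across these replicas
      graph:
        name: classifier     # must match the container name below
        type: MODEL
      componentSpecs:
        - spec:
            containers:
              - name: classifier
                image: registry.example.com/example-model:0.1  # placeholder image
```

Applying a manifest like this (e.g. with `kubectl apply -f`) creates the model pods; Kubernetes then handles scheduling and restarts, while Seldon Core exposes a single prediction endpoint in front of the replicas.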