Image created with DALL-E: “A scientist in anime style, intensely studying multiple computer screens displaying complex data streams.”

The robustness of Machine Learning (ML) systems is more than just good engineering practice; it’s a critical necessity.

Today, I would like to share an article from Meta that explores this theme, focusing on ML engineering and on challenges very similar to the ones we face daily at iFood.

Meta’s approach to machine learning prediction robustness

When we think about ML, many of us picture the pipeline of data analysis, feature generation, training, and finally, prediction. However, at companies with large products, we deal with dozens or hundreds of models that consume thousands of features from many different sources. This raises the importance of considering not just the health of the models but the robustness of the models and their predictions.

Why is robustness crucial?

Even if a model is operating correctly, responding in the desired time and format, it might still be generating poor or unexpected predictions. The Meta article I am sharing today addresses exactly this, highlighting the complexity of robustness in ML systems due to the stochastic nature of models and their data dependency.

I really liked this image from their article.

Here are some key points that all of us working with ML should consider to ensure the robustness of our systems (a minimal code sketch for each point follows the list):

  • Model Robustness: Every model, before going into production (including retrained versions of an existing model), should be validated against a hold-out set to ensure it delivers predictions as expected (sketch 1 below).
  • Feature Robustness: It is crucial to continuously detect anomalies and drift in the input data. For example, if a model assessing the purchase value of a car receives an input of R$10, that clearly indicates an error, since no car costs less than a reasonable minimum value (sketch 2 below).
  • Label Robustness: Checking whether the distribution of labels still makes sense is essential, especially if there have been changes in the training data that could affect the outcome (sketch 3 below).
  • Prediction Robustness: Just like labels and features, the behavior of predictions should stay consistent over time, so implementing monitoring for model outputs is extremely important (sketch 4 below).
  • ML Interpretability: Interpretability tools, such as SHAP values and feature importance, are indispensable in complex ML systems for understanding and explaining model behavior (sketch 5 below).
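
Sketch 1, model robustness: a minimal pre-deployment validation gate, assuming a scikit-learn-style classifier with a hold-out set kept apart from training. The metric and the `min_auc` threshold are hypothetical; in practice they should come from your own offline baseline.

```python
from sklearn.metrics import roc_auc_score

def validate_before_release(model, X_holdout, y_holdout, min_auc=0.75):
    """Block deployment if the (re)trained model underperforms on the hold-out set."""
    scores = model.predict_proba(X_holdout)[:, 1]
    auc = roc_auc_score(y_holdout, scores)
    if auc < min_auc:
        raise ValueError(
            f"Hold-out AUC {auc:.3f} below threshold {min_auc}; blocking release."
        )
    return auc
```

The key point is that the gate runs on every training job, including routine retrains, not just on the first version of the model.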
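
Sketch 2, feature robustness: two illustrative checks, a plausibility range for a single feature (like the car-price example above) and a Population Stability Index (PSI) between a reference window and live traffic. The 0.2 alarm level mentioned in the docstring is a common rule of thumb, not a universal constant.

```python
import numpy as np

def check_feature_range(value, name, min_val, max_val):
    """Flag inputs outside a plausible business range, e.g. a car priced at R$10."""
    if not (min_val <= value <= max_val):
        raise ValueError(
            f"Feature '{name}'={value} outside plausible range [{min_val}, {max_val}]"
        )

def population_stability_index(reference, live, bins=10):
    """PSI between a reference window and live data; > 0.2 is a common drift alarm."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference) + 1e-6
    live_pct = np.histogram(live, bins=edges)[0] / len(live) + 1e-6
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

# check_feature_range(10.0, "car_price", 5_000.0, 5_000_000.0)  # would raise: R$10 is implausible
```

In practice you would compute the PSI per feature on a schedule and alert when it crosses your chosen threshold.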
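
Sketch 3, label robustness: a deliberately simple comparison of per-class label shares between two training snapshots. In a real pipeline you might prefer a statistical test (e.g., chi-squared) over a raw difference.

```python
from collections import Counter

def label_share_shift(old_labels, new_labels):
    """Per-class change in label share between two training snapshots."""
    old_c, new_c = Counter(old_labels), Counter(new_labels)
    old_n, new_n = len(old_labels), len(new_labels)
    return {
        label: new_c[label] / new_n - old_c[label] / old_n
        for label in set(old_c) | set(new_c)
    }

# Example: the positive class share jumped from 10% to 25% between snapshots.
shift = label_share_shift([0] * 90 + [1] * 10, [0] * 75 + [1] * 25)
print(shift)  # {0: -0.15, 1: 0.15} -- worth investigating before retraining
```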
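
Sketch 4, prediction robustness: the same drift tooling applies to model outputs. Here is a simple mean-shift check between a trusted baseline window of scores and the live window; the `max_mean_shift` tolerance is hypothetical, and the PSI from sketch 2 works on scores as well.

```python
import numpy as np

def prediction_drift_alert(baseline_scores, live_scores, max_mean_shift=0.05):
    """Compare today's score distribution against a trusted baseline window."""
    shift = abs(float(np.mean(live_scores)) - float(np.mean(baseline_scores)))
    if shift > max_mean_shift:
        print(f"ALERT: mean prediction shifted by {shift:.3f} (> {max_mean_shift})")
    return shift
```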
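
Sketch 5, interpretability: a small end-to-end example with the shap library on a toy tree model. The data here is synthetic; with a real system you would run the explainer on a sample of production features.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy data stands in for real production features.
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = 3 * X[:, 0] + X[:, 1]  # target driven mostly by the first two features
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:50])  # per-feature contribution to each prediction
shap.summary_plot(shap_values, X[:50])       # global view of which features drive outputs
```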

Conclusion
ML systems are today key components of many applications and products. Therefore, it is essential to continuously monitor their health and quality. The robustness of ML models is not as simple or automatic as in other technological systems and requires constant vigilance. It is not enough to analyze the model when it was first trained; we must keep monitoring the features, labels, and predictions.

Written by Luiz Felipe Mendes

Co-founded Hekima, an AI company acqui-hired in 2020 by iFood. Head of Data Science at iFood — Recommendation and Search
