An Empirical Investigation of Missing Data Handling in Cloud Node Failure Prediction
Cloud computing systems have become increasingly prevalent, with a typical cloud system relying on millions of computing nodes as its basic infrastructure.
Node failure has been identified as one of the most prevalent causes of cloud system downtime.
To improve the reliability of cloud systems, many previous studies collected monitoring metrics from nodes and built models to predict node failures before the failures happen.
However, based on our experience with large-scale real-world cloud systems at Microsoft, we find that the task of predicting node failures is severely hampered by missing data.
A large fraction of the monitoring data is missing, and the missing rate is even higher in the latest online data used for prediction. As a result, the real-time performance of node failure prediction models is limited.
In this paper, we first characterize the missing data problem for node failure prediction.
Then, we evaluate several existing data interpolation approaches, and find that node-dimension interpolation approaches outperform time-dimension ones, and that deep-learning-based interpolation performs best for early prediction.
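To make the two interpolation dimensions concrete, the sketch below contrasts them on a toy node-by-timestamp metric matrix. This is an illustrative example, not the paper's implementation: the matrix values, the linear time interpolation, and the column-mean node interpolation are all assumptions chosen for simplicity (real node-dimension methods typically use similar-node matching or learned models).

```python
import numpy as np

# Toy metric matrix: rows = nodes, columns = timestamps (values are illustrative).
# NaN marks missing monitoring data.
X = np.array([
    [1.0, np.nan, 3.0, 4.0],
    [1.1, 2.1, np.nan, 4.1],
    [0.9, 1.9, 2.9, np.nan],
])

def interpolate_time(X):
    """Time-dimension interpolation: fill each node's series from its
    own neighboring timestamps (linear interpolation here)."""
    out = X.copy()
    for row in out:  # each row is a view into `out`, so edits persist
        idx = np.arange(row.size)
        missing = np.isnan(row)
        row[missing] = np.interp(idx[missing], idx[~missing], row[~missing])
    return out

def interpolate_node(X):
    """Node-dimension interpolation: fill a missing value from other
    nodes' values at the same timestamp (column mean here)."""
    out = X.copy()
    col_mean = np.nanmean(out, axis=0)  # per-timestamp mean, ignoring NaNs
    rows, cols = np.where(np.isnan(out))
    out[rows, cols] = col_mean[cols]
    return out
```

The practical difference: time-dimension interpolation needs past (and ideally future) observations of the same node, which are exactly what is unavailable for the latest online data, whereas node-dimension interpolation borrows from other nodes at the same timestamp and so remains applicable in the online setting.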
Our findings can help academics and engineers address the missing data problem in cloud node failure prediction and other data-driven software engineering scenarios.