DA Drive Analyzer’s drive failure prediction mechanism is about to get an upgrade. Predictions will soon be made the day after data is received. The new AI will support drive failure predictions for SATA, NVMe, and SCSI drives, and it will consume a host of new predictors without sacrificing speed. The basic algorithm underlying this upgrade is called Light Gradient-Boosting Machine, or LightGBM.

How does LightGBM work?

LightGBM is a machine learning framework commonly used for ranking and classification. Its high speed, accuracy, low memory usage, and ability to handle large-scale data make it ideal for use in ULINK DA Drive Analyzer.

LightGBM is an open-source distributed gradient-boosting framework based on decision tree algorithms. Before building the trees, LightGBM converts the data into histograms, grouping feature values into bins, which yields large gains in both speed and memory consumption. The histograms help the model identify the optimal split point for each leaf. It then grows trees leaf-wise, splitting the leaf it expects to yield the largest decrease in loss. Once the trees are grown, each tree’s prediction is weighted and summed to produce the final prediction.
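
As a rough illustration, here is a minimal sketch of that training-and-prediction loop using the open-source `lightgbm` Python package. The synthetic data and parameter values are illustrative assumptions, not the configuration used in DA Drive Analyzer:

```python
# Minimal LightGBM sketch with hypothetical drive-health data.
import lightgbm as lgb
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))            # stand-in for SMART-style attributes
y = (rng.random(1000) < 0.05).astype(int)  # rare "failure" label (imbalanced)

params = {
    "objective": "binary",
    "max_bin": 255,        # feature values are bucketed into <= 255 histogram bins
    "num_leaves": 31,      # caps leaf-wise growth; the best leaf is split first
    "learning_rate": 0.1,  # weight applied to each tree's contribution
    "verbose": -1,
}
booster = lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=100)

# The final score is the weighted sum of every tree's output, mapped
# through a sigmoid for binary classification.
print(booster.predict(X[:5]))  # predicted failure probabilities for 5 drives
```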

The LightGBM algorithm also employs two notable techniques: Gradient-Based One-Side Sampling (GOSS), which helps when training on class-imbalanced data (as is the case for drive failures), and Exclusive Feature Bundling (EFB), which lets the model train with less memory.

Gradient-Based One-Side Sampling (GOSS)

GOSS computes the loss gradient for each data record and orders the records by gradient magnitude. The records with the largest gradients are all retained, while the records with smaller gradients are randomly downsampled. This focuses training on the records the model has previously performed poorly on, which leads to better learning on imbalanced data.
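
To make this concrete, here is a toy NumPy sketch of the GOSS sampling step. It follows the published GOSS recipe rather than LightGBM’s internals, and the `top_rate`/`other_rate` values are illustrative:

```python
# Toy GOSS: keep every large-gradient record, downsample the rest.
import numpy as np

def goss_sample(gradients, top_rate=0.2, other_rate=0.1, seed=0):
    """Return selected row indices and a weight for each selected row."""
    rng = np.random.default_rng(seed)
    n = len(gradients)
    order = np.argsort(-np.abs(gradients))   # largest |gradient| first
    n_top = int(top_rate * n)
    top_idx = order[:n_top]                  # always retained
    other_idx = rng.choice(order[n_top:], size=int(other_rate * n), replace=False)

    idx = np.concatenate([top_idx, other_idx])
    weights = np.ones(len(idx))
    # Upweight the sampled small-gradient rows by (1 - top_rate) / other_rate
    # so the reduced set remains an unbiased stand-in for the full data.
    weights[n_top:] = (1.0 - top_rate) / other_rate
    return idx, weights

grads = np.random.default_rng(1).normal(size=1000)
idx, w = goss_sample(grads)
print(len(idx), w.min(), w.max())  # 300 selected rows; weights 1.0 and 8.0
```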

Exclusive Feature Bundling (EFB)

EFB is a smart way to reduce the number of features (i.e., predictors) the model has to consider. If two or more features are mutually exclusive, meaning they rarely take meaningful values on the same record, such as a column marking whether a drive is an HDD or an SSD and another marking whether its disk rotation speed is high or low, LightGBM can merge the information from those columns into a single column. This feature reduction makes it faster for the model to work with a large number of features.
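
Here is a toy sketch of that bundling idea with hypothetical drive columns. LightGBM’s actual EFB uses a graph-coloring heuristic to decide which features to bundle and tolerates a small number of conflicts; this sketch shows only the simplest two-column case:

```python
# Toy EFB: merge two mutually exclusive columns into one.
import numpy as np

def bundle_exclusive(a, b):
    """Merge columns a and b, assumed never nonzero on the same row.

    Nonzero values of b are shifted past a's value range so both
    original signals stay distinguishable in the bundled column.
    """
    assert not np.any((a != 0) & (b != 0)), "columns are not mutually exclusive"
    bundled = a.astype(float)
    mask = b != 0
    bundled[mask] = b[mask] + a.max()
    return bundled

is_ssd = np.array([0, 0, 1, 1])     # hypothetical: 1 marks an SSD
rpm_class = np.array([2, 1, 0, 0])  # hypothetical: 1 = low, 2 = high spindle speed
print(bundle_exclusive(is_ssd, rpm_class))  # [3. 2. 1. 1], one column, both signals
```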

Key Benefits of LightGBM

The LightGBM algorithm boasts a range of significant advantages that make it a popular choice for various machine learning tasks. Here’s a breakdown of its key benefits: 

Higher Speed and Accuracy: LightGBM is designed with efficiency in mind. Its unique leaf-wise tree growth strategy significantly accelerates the training process. This swift approach to building decision trees allows LightGBM to process data faster than many other gradient boosting algorithms. Despite its speed, LightGBM maintains a high level of accuracy, ensuring reliable and precise predictions. 

Lower Memory Usage: LightGBM’s histogram-based learning approach enables it to convert continuous feature values into discrete bins. This methodology reduces memory consumption during training and inference compared to other algorithms. The smaller memory footprint makes it more accessible for deployment on resource-constrained devices or in environments with limited memory capacity.
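
The main knob behind this is `max_bin` (255 by default). As a small sketch with synthetic data, shrinking it trades a little split-point precision for smaller histograms:

```python
# Sketch: shrinking histograms to cut memory use.
import lightgbm as lgb
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = rng.integers(0, 2, size=500)

# max_bin sets how many discrete bins each continuous feature is mapped
# into. Fewer bins mean smaller histograms and lower memory use, at some
# cost in how finely split points can be placed.
booster = lgb.train(
    {"objective": "binary", "max_bin": 63, "verbose": -1},
    lgb.Dataset(X, label=y),
    num_boost_round=50,
)
```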

Parallel, Distributed, and GPU Learning: LightGBM supports parallel computing and GPU acceleration, enabling the algorithm to leverage the power of multiple processors or graphics processing units. This parallelization accelerates both training and prediction, making LightGBM suitable for handling large datasets and complex models.
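
For reference, these are the standard LightGBM parameters that control this behavior. The GPU setting assumes a LightGBM build compiled with GPU support, so this is shown as configuration rather than a full training run:

```python
# Parameter dictionaries for parallel, GPU, and distributed training.
cpu_parallel = {
    "objective": "binary",
    "num_threads": 8,        # use 8 CPU threads for histogram construction
}
gpu = {
    "objective": "binary",
    "device_type": "gpu",    # offload training to a GPU (GPU-enabled build only)
}
distributed = {
    "objective": "binary",
    "tree_learner": "data",  # data-parallel learning across multiple machines
}
```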

Handling Large-Scale Data: LightGBM is particularly adept at handling large-scale datasets. Its leaf-wise tree growth strategy and efficient histogram-based approach ensure that the algorithm scales well to substantial volumes of data without sacrificing performance or accuracy. This scalability is crucial for applications dealing with massive datasets.

We hope you can try out our new model soon.

 


Latest Versions

DriveMaster Release

DriveMaster 10: v10.0.1850 (New)

Test Suite Release

ULINK NVMe Protocol: v6.0
ULINK NVMe Regression: v5.0

TCG Opal Family Certification: v6.0
TCG Opal Family SSC Multiple Namespaces Protocol Test Suite: v2.5 (New)
TCG Opal Family SSC Application Note: v5.5
TCG Enterprise Application Note: v5.0
ULINK TCG/I1667 Opal Family Protocol: v10.0
ULINK TCG Enterprise Protocol: v5.0

ULINK SATA/ATA Protocol: v10.0 (New)
ULINK SATA/ATA Regression: v9.0
SATA-IO Device Digital: v5.0 (New)
SATA-IO Host Digital: v3.0

ULINK SAS/SCSI Protocol: v4.5
ULINK SAS/SCSI Regression: v5.0

Test Reporter

v4.4.1