Google has announced the general availability of its TabNet algorithm. According to Google, TabNet requires no coding and provides an integrated tool chain that simplifies running training jobs on users' data. TabNet has been made available as a built-in algorithm on Google Cloud AI Platform.
What is TabNet?
In Google's words, TabNet uses a machine learning (ML) technique called sequential attention. It combines the best of both worlds: the explainability of tree-based models and the high performance of deep neural networks. The California giant's goal is to enable scientists and engineers with limited coding knowledge to benefit from building ML models. TabNet also offers hyperparameter tuning, which lets users achieve high performance without having to grapple with the technical depth involved.
How Does TabNet Work?
Here's how Google describes how TabNet works in the blog post announcing its general availability: “TabNet uses a machine learning technique called sequential attention to select which model features to reason from at each step in the model. This mechanism makes it possible to explain how the model arrives at its predictions and helps it learn more accurate models. Thanks to this design, TabNet not only outperforms other neural networks and decision trees but also provides interpretable feature attributions. Releasing TabNet as a built-in algorithm means you’ll be able to easily take advantage of TabNet’s architecture and explainability and use it to train models on your own data.”
By combining explainability with high performance, TabNet ensures that models trained on tabular data are not only interpretable but also benefit from the predictive power of deep neural networks. Google describes this as a “decision-tree-like” mapping of data: users get the interpretability and quicker pace of tree-based approaches while leveraging the performance boost of deep learning architectures.
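To make the idea of sequential attention concrete, the toy NumPy sketch below shows the core mechanism the TabNet paper describes: at each decision step, a learned layer produces a sparse mask over the input columns, only the selected features feed that step, and a prior discourages reusing the same features later. This is a hand-rolled illustration with random stand-in weights, not Google's implementation; the sparsemax projection used here is the sparse alternative to softmax from the TabNet paper.

```python
import numpy as np

def sparsemax(z):
    # Sparsemax projection: like softmax, but can assign exactly
    # zero weight to some features, giving a sparse selection mask.
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, len(z) + 1)
    cssv = np.cumsum(z_sorted)
    cond = 1 + k * z_sorted > cssv
    k_max = k[cond][-1]
    tau = (cssv[k_max - 1] - 1) / k_max
    return np.maximum(z - tau, 0.0)

rng = np.random.default_rng(0)
n_features, n_steps = 8, 3
x = rng.normal(size=n_features)      # one row of tabular data
prior = np.ones(n_features)          # all features initially available

for step in range(n_steps):
    # Stand-in for the learned attentive transformer at this step.
    W = rng.normal(size=(n_features, n_features))
    mask = sparsemax(prior * (W @ x))   # sparse mask: which features this step uses
    masked_x = mask * x                 # only selected features feed this step
    prior = prior * (1.0 - mask)        # discourage reusing the same features
    print(f"step {step}: features used -> {np.flatnonzero(mask)}")
```

Because each step's mask is explicit and sparse, you can read off exactly which columns contributed to each stage of the prediction, which is where TabNet's interpretable feature attributions come from.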
Hyperparameter tuning likewise aims to make it easier for users to arrive at robust models. The combination of high-level interpretability and a performance boost, as claimed by Google, makes TabNet an exciting launch in the tabular learning space. These capabilities could give an edge to retail, finance, and insurance applications in areas such as credit scoring, fraud detection, and forecasting. Those interested in trying out TabNet can visit Google's quick start here.