When it comes to benchmarking our AI model architecture, we’re not just comparing ourselves to neural networks. Neural networks are, after all, just one approach to machine learning and AI. Amongst the alternatives sits XGBoost, a powerful gradient boosting library widely used in predictive analytics.
XGBoost, or Extreme Gradient Boosting, builds an ensemble of decision trees using gradient boosting. Unlike deep learning, which relies on layers of artificial neurons trained via backpropagation, XGBoost refines its predictions iteratively, with each new tree trained to correct the errors of the ensemble built so far. This makes it exceptionally effective on structured tabular data, where the relevant features are already well defined and do not need to be learned from raw inputs.
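To make that concrete, here is a minimal sketch of the boosting loop at the heart of the approach: each new tree is fitted to the residual errors of the ensemble built so far. This is a simplified squared-error illustration using scikit-learn trees, not XGBoost’s actual regularised objective.

```python
# Minimal gradient boosting sketch: each tree corrects the residual
# errors of the ensemble built so far. Simplified squared-error
# version for illustration, not XGBoost's regularised objective.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=500)

learning_rate = 0.1
prediction = np.full_like(y, y.mean())  # start from the mean prediction
trees = []

for _ in range(100):
    residuals = y - prediction             # errors of the current ensemble
    tree = DecisionTreeRegressor(max_depth=3)
    tree.fit(X, residuals)                 # new tree learns those errors
    prediction += learning_rate * tree.predict(X)
    trees.append(tree)

print(f"Final training MSE: {np.mean((y - prediction) ** 2):.4f}")
```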
Its efficiency on large datasets, native handling of missing values, and resistance to overfitting through built-in regularisation make XGBoost a staple in fields such as financial risk modelling, fraud detection, and predictive maintenance. Moreover, its ability to parallelise tree construction allows it to train rapidly, even on relatively modest hardware.
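Those capabilities map directly onto XGBoost’s configuration. As an illustration only, the hyperparameter values below are arbitrary rather than the settings used in our benchmark:

```python
import numpy as np
from xgboost import XGBClassifier

# Toy tabular data containing missing values (NaN), which XGBoost
# handles natively by learning a default split direction per node.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
X[rng.random(X.shape) < 0.05] = np.nan   # ~5% missing entries
y = (np.nansum(X[:, :3], axis=1) > 0).astype(int)

model = XGBClassifier(
    n_estimators=200,     # number of boosting rounds
    max_depth=4,
    learning_rate=0.1,
    reg_lambda=1.0,       # L2 regularisation on leaf weights
    reg_alpha=0.5,        # L1 regularisation on leaf weights
    n_jobs=4,             # parallel tree construction
)
model.fit(X, y)
print(model.predict(X[:5]))
```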
That raises the key question: how does Literal Labs’ logic-based AI, rooted in propositional logic and Tsetlin machines, compare? With its reinforcement-based learning mechanism and Boolean decision-making, can it outperform established machine learning methods like XGBoost on resource-constrained devices?
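For readers new to Tsetlin machines: at inference time, a trained model is essentially a set of conjunctive clauses over Boolean input literals, with clauses voting for or against each class. The sketch below illustrates that voting step only; the clauses themselves are invented for illustration, and Literal Labs’ production architecture is considerably more sophisticated.

```python
# Simplified Tsetlin machine inference sketch (illustration only).
# Each clause is an AND over input literals (bits and their negations).
# Positive clauses vote for a class, negative clauses vote against it;
# the class with the highest vote sum wins.

def eval_clause(x, include_pos, include_neg):
    """Clause fires iff every included literal is satisfied."""
    return all(x[i] for i in include_pos) and all(not x[i] for i in include_neg)

def classify(x, classes):
    votes = {}
    for label, (pos_clauses, neg_clauses) in classes.items():
        score = sum(eval_clause(x, p, n) for p, n in pos_clauses)
        score -= sum(eval_clause(x, p, n) for p, n in neg_clauses)
        votes[label] = score
    return max(votes, key=votes.get)

# Hypothetical two-class model over 4 Boolean features. Each clause is
# (indices required to be 1, indices required to be 0).
classes = {
    "A": ([([0, 1], []), ([0], [3])],   # clauses voting for A
          [([2], [0])]),                # clauses voting against A
    "B": ([([2, 3], []), ([3], [1])],
          [([0, 1], [])]),
}

print(classify([1, 1, 0, 0], classes))  # -> "A"
```

Because inference reduces to Boolean operations and integer counting, with no floating-point arithmetic required, this style of model is a natural fit for small microcontrollers.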
To answer this, Literal Labs benchmarked our logic-based AI approach against XGBoost across multiple datasets, including Statlog, Sport, Sensorless Drive, Human Activity, Gesture, EMG, and Digits. Both implementations were run on an ESP32 microcontroller, chosen specifically for its relevance to real-world TinyML and edge AI applications.
Why ESP32? Because we believe artificial intelligence should be able to run efficiently at the edge, where low-power, low-cost devices can drive real-world impact. By focusing on inference performance on resource-limited hardware, we ensure our models are optimised for deployment beyond high-performance cloud environments.
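For a sense of what on-device measurement looks like, here is a hypothetical MicroPython timing sketch. It is not our actual benchmark harness, and `model_predict` is a placeholder for whichever inference routine is deployed:

```python
# Hypothetical MicroPython sketch for timing inference on an ESP32.
# `model_predict` is a placeholder, not an actual Literal Labs or
# XGBoost API.
import time

def benchmark(model_predict, sample, n_runs=100):
    start = time.ticks_us()
    for _ in range(n_runs):
        model_predict(sample)
    elapsed = time.ticks_diff(time.ticks_us(), start)
    return elapsed / n_runs  # average microseconds per inference

# Example with a trivial stand-in model:
avg_us = benchmark(lambda x: sum(x) > 0, [1, 0, 1, 1])
print("avg inference latency:", avg_us, "us")
```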
The results? Literal Labs' logic-based AI outperformed XGBoost, just as it has against neural networks. While XGBoost remains a strong choice for structured data, it is largely confined to classification, regression, and ranking on tabular datasets. In contrast, our architecture delivers broader applicability alongside superior speed, energy efficiency, and explainability, all critical factors in real-world edge deployments.
By benchmarking against industry standards and across diverse datasets, we continue to demonstrate that logic-based AI is not just an alternative to deep learning: it is a compelling advancement in energy-efficient, interpretable, and high-performance machine learning at the edge.
If you'd like to understand how Literal Labs achieved these results against XGBoost across seven different datasets, enter your details below. We’ll send you a PDF white paper breaking down the problem, the approach, and our benchmarked results, along with details on how Tsetlin machines and logic-based AI might benefit your own AI efforts and address the limitations of your current models.