Newcastle upon Tyne, United Kingdom, 10th October, 2024 — Literal Labs, a spin-out that specialises in Tsetlin machine AI architecture, today announced record-breaking results on the MLCommons MLPerf Inference: Tiny Anomaly Detection Benchmark. This benchmark showcases the capabilities of Literal Labs’ unique AI technologies, highlighting their AI models’ abilities to address a critical challenge in industrial environments: anomaly detection in machine operating sounds. Using the ToyADMOS dataset, which focuses on identifying irregularities in audio data—a vital aspect of predictive maintenance—Literal Labs has delivered unparalleled performance, outpacing state-of-the-art results in speed, energy efficiency, and model size, all while operating on affordable, off-the-shelf microcontrollers.
Solving the Challenge of Anomaly Detection in Industrial Equipment
Anomaly detection, particularly within audio-based datasets, is key to the success of predictive maintenance systems that aim to pre-empt machine failures before they occur. The ToyADMOS dataset, an audio dataset representative of machine health problems, helps evaluate how AI models detect subtle irregularities or outliers in these audio streams—signals that could indicate early signs of equipment malfunction or wear. Such anomaly detection AI models are crucial for extending the lifecycle of industrial machinery and reducing unplanned downtime.
Literal Labs’ anomaly detection model has been benchmarked against this dataset to demonstrate its superior performance. Their model was specifically optimised to run on constrained, low-cost hardware platforms, such as edge IoT devices used in industrial settings, where resources such as processing power and energy are limited.
Breakthrough Edge AI Performance
Through its Tsetlin machine approach to AI modelling, Literal Labs achieved inference 54 times faster than the best published results on the same hardware platform. Literal Labs selected an Arm Cortex-M7-based platform from STMicroelectronics as its reference platform and created an autoencoder model to compete with the fully connected autoencoder neural network specified in the MLPerf benchmarks. This platform was selected as it reflects the type of affordable, real-world hardware commonly found in edge deployments. The results underline Literal Labs’ commitment to providing highly scalable AI solutions without requiring specialised, expensive hardware.
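For context, the MLPerf Tiny anomaly-detection reference model scores each input by how poorly an autoencoder trained only on normal data can reconstruct it. The sketch below illustrates that scoring principle in miniature; it uses a PCA projection as a stand-in for a trained network and entirely synthetic data, and does not reflect Literal Labs’ Tsetlin machine implementation or the MLPerf reference code.

```python
import numpy as np

# Illustrative sketch only: a PCA projection stands in for a trained
# autoencoder's encoder/decoder. All data below is synthetic.
rng = np.random.default_rng(0)

# "Normal" machine readings lie on a 4-dim latent subspace of an 8-dim
# feature space (think summary statistics of audio frames).
latent = rng.normal(size=(200, 4))
mixing = rng.normal(size=(4, 8))
normal_data = latent @ mixing

# Fit the bottleneck on normal data only, as autoencoder-based anomaly
# detectors are trained exclusively on normal samples.
mean = normal_data.mean(axis=0)
_, _, vt = np.linalg.svd(normal_data - mean, full_matrices=False)
components = vt[:4]  # 4-dimensional bottleneck

def anomaly_score(x: np.ndarray) -> float:
    """Mean squared reconstruction error: high for off-distribution input."""
    z = (x - mean) @ components.T      # encode into the bottleneck
    x_hat = z @ components + mean      # decode back to feature space
    return float(np.mean((x - x_hat) ** 2))
```

Normal samples reconstruct almost exactly, so their scores sit near zero, while inputs that fall off the learned subspace score orders of magnitude higher; a threshold calibrated on held-out normal data then flags anomalies.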
In terms of energy efficiency, Literal Labs’ model consumed 52 times less energy than the leading benchmarked models across all hardware configurations—a critical advantage for devices deployed in environments where battery life and power availability are limiting factors. Literal Labs’ models are uniquely positioned to solve the energy-efficiency challenges of AI applications running at the edge, particularly for industrial IoT devices.
Driving Innovation in Predictive Maintenance
“These results reflect our focus on solving industry-critical problems while minimising the cost outlay and environmental impact typical of AI-powered solutions,” said Leon Fedden, CTO of Literal Labs. “Audio-based anomaly detection in industrial equipment is a perfect example of where scalable, efficient AI can make a profound impact. Our models provide unprecedented performance for detecting irregularities in machine sounds, all while running on low-cost hardware. This is game-changing for predictive maintenance in sectors ranging from manufacturing to logistics, where downtime is costly and efficiency is paramount.”
The results achieved by Literal Labs underscore the growing need for AI solutions that can operate efficiently in resource-constrained environments. By reducing both latency and energy consumption while maintaining high accuracy, Literal Labs is poised to redefine how AI is deployed at the edge, particularly in applications where real-time anomaly detection is vital.
The company has published a whitepaper on its anomaly detection approach, available on its website.
Published: 10 Oct 2024 by Literal Labs