Edge AI for anomaly detection with ToyADMOS and MLPerf Tiny benchmark

Download the whitepaper 

Anomalies. Problems. Solutions.

Machinery surrounds us. It powers our societies. When it stops working, the consequences can be expensive or even dire. That’s why predictive maintenance has become crucial—evolving into a $9 billion industry.

Traditionally, maintenance followed a calendar—regular, yet blind to what truly mattered. Today, AI-based predictive maintenance solutions enable us to service machinery exactly when it’s needed—reducing downtime, cutting costs, and keeping systems operational. But the sophistication of such AI models hinges on two key elements: IoT sensors and the algorithms interpreting the data they capture.

A common approach

The challenge? How do we know which AI technologies truly excel at anomaly detection? And how do we compare different technologies fairly? Enter MLCommons and their MLPerf Tiny benchmark—a standardised software suite to ensure fair and accurate measurement of AI inference on edge devices.

MLPerf Tiny allows companies to objectively measure the key performance metrics of their AI models and of the methods they use to accelerate them. It enables researchers, developers, and users to focus on one commanding idea: performance that actually matters.

MLCommons’ anomaly detection benchmarking software ensures that every key measurement is conducted under the same conditions and against an identical dataset—ToyADMOS. The only differences lie in the hardware, the optimisations made to a model, and, potentially, the model itself. While MLPerf benchmarks utilise a pre-trained model, the software can equally be used to benchmark other approaches. And that’s what Literal Labs has done—benchmarking our logical AI model against the neural network version provided by MLCommons. All of that comes together to enable customers to understand how an AI will perform if deployed on low-end edge hardware or a high-end alternative.
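To make the measurement concrete: in benchmarks of this kind, an anomaly detection model assigns each audio clip a score (for the MLPerf Tiny reference model, broadly, the reconstruction error of an autoencoder over the clip’s audio features), and the clip-level AUC summarises how well those scores separate normal from anomalous machine sounds. The sketch below is purely illustrative; it is not the MLPerf Tiny harness or reference model, and the data, feature dimensions, and stand-in reconstructor are invented for demonstration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def anomaly_score(frames, reconstruct):
    """Mean squared reconstruction error over one clip's feature frames.

    frames: array of shape (num_frames, num_features), e.g. log-mel features.
    reconstruct: callable mapping frames to their reconstruction.
    Higher scores mean the clip looks less like the 'normal' training data.
    """
    recon = reconstruct(frames)
    return float(np.mean((frames - recon) ** 2))

# Toy demonstration on synthetic data (not ToyADMOS), with a trivial
# stand-in "reconstructor" that always predicts zeros.
rng = np.random.default_rng(0)
normal_clips = [rng.normal(0.0, 1.0, size=(196, 128)) for _ in range(20)]
anomalous_clips = [rng.normal(0.5, 1.5, size=(196, 128)) for _ in range(20)]
zero_reconstructor = lambda frames: np.zeros_like(frames)

scores = [anomaly_score(c, zero_reconstructor) for c in normal_clips + anomalous_clips]
labels = [0] * len(normal_clips) + [1] * len(anomalous_clips)
print("clip-level AUC:", roc_auc_score(labels, scores))
```

In deployment, each score would be compared against a threshold chosen on normal-only data; alongside detection quality, the benchmark also measures latency (and, optionally, energy) per inference on the target hardware, which is where the speed and energy figures below come from.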

Logical results

The proof is in the performance. Literal Labs took version 1.2 of the benchmark, applied its novel model technology, and achieved state-of-the-art results. Our logical AI models performed 54 times faster and consumed 52 times less energy than the best published, like-for-like results for this benchmark. This is more than an improvement—it’s a leap forward in efficiency.

54x faster
52x less energy
7.29 KiB model size

Download the whitepaper

Processing data 54x faster than optimised neural networks is an achievement, not an anomaly. We’ve chosen to reinvent the AI wheel precisely because we’ve shown we can build a better one. Literal Labs’ logic-based AI models consistently deliver higher speeds, consume less energy, require no water cooling, and offer fully explainable inference. And they do so without compromising on accuracy.

To understand how Literal Labs achieved winning results on anomaly detection using the MLPerf Tiny inference benchmark, enter your details below. We’ll send you a PDF whitepaper breaking down the problem, the approach, the ToyADMOS audio dataset, the cost and specifications of the hardware involved, and our benchmarked results.