A New Generation of Artificial Intelligence

Literal Labs applies the Tsetlin machine approach to deliver AI that is faster, explainable, and orders of magnitude more energy efficient than today's neural networks.

Literal Labs’ models are the fastest way to run AI on the edge

Run low-power AI models up to 250x faster. And do so on-device and on the edge. Enable new business cases and deployments across a whole range of problem domains.

Anomaly detection benchmarks: the Literal Labs model delivers 54x faster inference and uses 52x less energy than the published optimised model.
AI Performance to New Heights
We’re constantly pushing the limits of our AI models. Our latest public benchmarks showcase the performance of Tsetlin Machines against the MLPerf Tiny anomaly detection benchmark, as well as separate benchmarks for audio anomaly detection in industrial machinery. Discover how we’re setting new standards in speed, efficiency, and accuracy in real-world environments.
Meet Us at Web Summit 2024
Join us at November’s Web Summit in Lisbon! Our CEO and key members of the engineering and product teams will be there, showcasing how Literal Labs is empowering companies to train AI models. As both an Alpha and Impact start-up, we’ll be hosting our stand on 14 November. Come see us and explore the future of fast, energy-efficient AI.

A New Logic for AI
A New Standard for Performance

By building our AI models on a foundation of logic, we’re able to unlock the five key advantages that set our approach apart: ultra-low power use, lightning-fast inference, tiny model size, edge training, and explainable AI. These are the metrics that matter most for today’s critical systems—and we’re building for each of them.
Ultra low power

Tsetlin algorithms are less compute-heavy than neural networks and, when accelerated, use orders of magnitude less energy per inference

High throughput

250x faster inference with Tsetlin machine models, and up to 1,000x when accelerated

On chip training

Our technology uniquely enables edge training, without the need for cloud support

Explainable AI

Our architecture enables explainability and ensures accountability for decisions made

Tsetlin Approach

Capable of handling complex machine learning tasks just as neural networks are, Tsetlin Machines offer a refreshing alternative: one that's faster, more energy-efficient, and naturally explainable.

Unlike neural networks, which are inspired by biology, the Tsetlin approach is rooted in propositional logic. This makes it far more efficient, streamlining inference and reducing computational complexity, all while consuming significantly less energy.
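To make the propositional-logic framing concrete, the toy sketch below shows how a trained Tsetlin machine classifies: each clause is a simple AND over binary features (or their negations), and a class score is just a count of clause votes. The clause sets, feature encoding, and function names here are illustrative assumptions for explanation only, not Literal Labs' production models.

```python
# Minimal sketch of Tsetlin-style inference (assumed toy clauses, not a real model).
# A clause is a conjunction (AND) of literals: a feature index plus a flag saying
# whether the feature itself or its negation must hold.

from typing import List, Tuple

Literal = Tuple[int, bool]   # (feature index, negated?)
Clause = List[Literal]

def clause_fires(clause: Clause, x: List[int]) -> bool:
    """A clause outputs 1 only if every literal in it is satisfied."""
    return all((x[i] == 0) if negated else (x[i] == 1) for i, negated in clause)

def class_score(x: List[int], positive: List[Clause], negative: List[Clause]) -> int:
    """Class score: positively polarised clauses vote +1, negatively polarised clauses vote -1."""
    return sum(clause_fires(c, x) for c in positive) - sum(clause_fires(c, x) for c in negative)

# Toy example with two binary features and one class.
positive_clauses = [[(0, False)], [(0, False), (1, True)]]   # x0, and x0 AND NOT x1
negative_clauses = [[(1, False)]]                            # x1

print(class_score([1, 0], positive_clauses, negative_clauses))  # 2  -> strong vote for the class
print(class_score([0, 1], positive_clauses, negative_clauses))  # -1 -> vote against the class
```

Because inference reduces to bitwise logic and counting rather than floating-point matrix arithmetic, it maps naturally onto low-power hardware, and the firing clauses themselves double as a human-readable explanation of each decision.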

AI architecture built on the Tsetlin approach by world-leading experts

Supported by

Literal Labs is supported by Newcastle University in the United Kingdom
Silicon Catalyst UK supports Literal Labs
Cambridge Future Tech have invested in Literal Labs

Press features

Tsetlin Machine & AI sustainability: Tsetlin machines, energy consumption, and the potential for sustainable AI.
A Step Towards Sustainable AI and Energy Efficiency: Revolutionising AI with an energy-efficient, transparent approach.
We're an AI startup to watch: Literal Labs featured in EE Times 'Silicon 100,' recognised for pioneering energy-efficient AI.
Strategic Innovations in U.K. AI Startup Leadership: How does a startup navigate from the lab to the marketplace?
Call for a 21st Century Industrial Strategy for UK's Tech Ascent: "A properly integrated industrial strategy is what will enable British technology startups to truly scale"