It’s only logical

Logical AI algorithms

The choice is simple: a neural network, or an AI architecture that’s faster, more explainable, and more cost-effective? It’s logical, really.

Designed for speed, efficiency, and accuracy—while minimising AI’s environmental impact—Literal Labs’ logic-based AI architecture isn’t just an upgrade. It’s a breakthrough. Models built on it run up to 54× faster than neural networks and 250× faster than XGBoost, while staying within ±2% of their accuracy. Inference runs seamlessly on-device, whether on an MCU or a server. And it doesn’t achieve this by optimising existing models—it redefines AI from the ground up.

Now, we’re giving you the tools to build, train, and deploy logic-based AI models yourself.

Architectural advantages

Logic-based AI isn’t just different—it’s a smarter way forward. Literal Labs has reengineered AI from the ground up, building a novel architecture that delivers speed, efficiency, and explainability without compromise.

Whether on the edge or in the cloud, our models process faster, consume less power, and provide clear, transparent decision-making. Solving an ever-growing range of challenges, this is AI that’s leaner, more scalable, and designed for real-world impact.

Ultra-Fast

Delivers AI inference up to 250× faster than XGBoost and up to 54× faster than neural networks. Real-time responses, no delays, no bottlenecks.

Power-Friendly

Consumes up to 52× less energy than neural networks—perfect for low-power, battery-driven devices.

Accurate

Achieves accuracy within ±2% of neural networks while running leaner, faster, and far more efficiently.

Cost-Effective

Process data on-device. Cut cloud reliance, slash compute costs, and reduce data transfer overheads.

Reliable

Runs on proven MCUs with local inference. Works anywhere—even in low-connectivity environments.

Ultra-Explainable

Built for explainable AI. Logic-based architecture ensures transparency, interpretability, and accountability.

Innovate Faster

Transform IoT devices by embedding AI at the edge, unlocking new capabilities, smart features, and products.

Privacy First

Process data locally. Minimise risk. Keep insights secure—without ever compromising performance.

Training logic-based AI models

Logic-based AI’s Building Blocks

Logic-based AI isn’t one-size-fits-all; each model is precision-engineered for your application’s needs. Through a calculated and benchmarked blend of techniques, each trained model is fine-tuned to maximise speed, efficiency, explainability, and accuracy. Literal Labs’ logic-based architecture strips away computational waste, delivering AI that’s leaner, smarter, and more scalable.

Each model is built from these techniques in different ways—tailored to the task, optimised for performance. The result? AI that’s faster, clearer, and built to scale.

1-bit processing

Minimal processing, maximum efficiency. By reducing computation to just one bit per operation, this technique slashes power demands while delivering lightning-fast AI inference.
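
As an illustration of the idea (a minimal sketch, not Literal Labs’ implementation), a single bitwise AND over packed machine words can evaluate an entire logical clause in one step:

# Illustrative 1-bit inference sketch; names and structure are hypothetical.

def clause_fires(input_bits: int, include_mask: int) -> bool:
    """A clause fires when every literal it includes is 1 in the input.
    Both arguments are machine words holding one bit per feature."""
    # AND keeps only the included literals that are set in the input;
    # the clause is satisfied when none of its included literals is 0.
    return (input_bits & include_mask) == include_mask

# Example: 8 binary features packed into one word.
x = 0b10110101                 # binarised sensor reading
mask = 0b10000101              # clause includes features 0, 2, and 7
print(clause_fires(x, mask))   # True: every included bit is set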

Data binarisation

Raw data, reimagined. Data binarisation transforms complex inputs into a streamlined, logical format—enhancing model speed, reducing memory overhead, and improving interpretability.
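
A common way to binarise continuous inputs for logic-based models is thermometer encoding; the sketch below shows the idea (the thresholds are our own illustrative choice, not a documented Literal Labs method):

import numpy as np

def thermometer_encode(values: np.ndarray, thresholds: np.ndarray) -> np.ndarray:
    """Map each continuous value to one bit per threshold:
    bit i is 1 when value >= thresholds[i]."""
    return (values[:, None] >= thresholds[None, :]).astype(np.uint8)

temps = np.array([12.0, 19.5, 31.0])               # raw sensor readings
cuts = np.array([10.0, 15.0, 20.0, 25.0, 30.0])    # binarisation thresholds
print(thermometer_encode(temps, cuts))
# [[1 0 0 0 0]
#  [1 1 0 0 0]
#  [1 1 1 1 1]]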

Propositional logic

AI that thinks in rules, not black boxes. Propositional logic ensures transparent, deterministic decision-making, unlocking AI that’s both explainable and scalable.
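
To make the transparency concrete, here is a minimal sketch (the rule and feature names are hypothetical) of a propositional rule evaluated as a conjunction of named literals, with every decision traceable to the exact literals that fired:

def evaluate_rule(rule: list[tuple[str, bool]], features: dict[str, bool]) -> bool:
    """A rule is a list of (feature, expected_value) literals;
    it fires only when every literal matches the input."""
    return all(features[name] == expected for name, expected in rule)

rule = [("vibration_high", True), ("temp_high", True), ("idle", False)]
reading = {"vibration_high": True, "temp_high": True, "idle": False}

if evaluate_rule(rule, reading):
    # The decision can be read back as the literals that satisfied it.
    print("fault predicted because: " +
          " AND ".join(f"{name}={int(value)}" for name, value in rule))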

Sparsity optimisation

Do more with less. Unique sparsity techniques cut computational redundancy, reducing model size and energy use while maintaining precision.
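
One simple form of sparsity optimisation is pruning clauses that rarely contribute; this sketch assumes a per-clause firing count gathered on a validation set and an illustrative threshold:

import numpy as np

def prune_clauses(fire_counts: np.ndarray, weights: np.ndarray,
                  min_rate: float, n_samples: int):
    """Keep only clauses that fired often enough on a validation set."""
    keep = fire_counts / n_samples >= min_rate
    return weights[keep], keep

counts = np.array([980, 3, 450, 0, 712])   # activations over 1,000 samples
weights = np.array([2.0, 0.5, 1.5, 0.1, 1.0])
pruned, kept = prune_clauses(counts, weights, min_rate=0.05, n_samples=1000)
print(kept)     # [ True False  True False  True]
print(pruned)   # the two near-dead clauses are gone: smaller, faster model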

Tsetlin Machines

Pattern recognition redefined. Unlike traditional AI, Tsetlin Machines learn through rule-based logic, delivering high efficiency without the need for gradient-based optimisation.
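
At the heart of a Tsetlin Machine is the Tsetlin automaton: an integer state that drifts toward “include” or “exclude” under reward and penalty feedback, with no gradients involved. The toy below omits clauses, voting, and the full Type I/II feedback rules:

class TsetlinAutomaton:
    def __init__(self, n_states: int = 100):
        self.n_states = n_states
        self.state = n_states              # start on the "exclude" boundary

    def included(self) -> bool:
        return self.state > self.n_states  # upper half of states = include

    def reward(self) -> None:
        # Reinforce the current action by moving deeper into its half.
        self.state += 1 if self.included() else -1
        self.state = max(1, min(self.state, 2 * self.n_states))

    def penalize(self) -> None:
        # Nudge the automaton toward the opposite action.
        self.state += -1 if self.included() else 1

ta = TsetlinAutomaton()
print(ta.included())   # False: the literal starts out excluded
ta.penalize()          # feedback pushes it across the midpoint
print(ta.included())   # True: the literal is now included in the clause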

The logical AI model training and deployment flow includes the following steps:

1. Data collection from cloud buckets or sensors
2. Processing and pre-processing of the data
3. Training of the models
4. Testing and benchmarking of the models
5. Packaging of the code and libraries needed for deployment
6. Deployment of the models
7. Monitoring of the deployed models
8. Collection of new device data and its storage or processing

Literal Labs’ training and deployment pipeline for logical and symbolic AI models powers the training, benchmarking, deployment, and monitoring of models trained on your own or synthetic datasets.
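
Conceptually, such a pipeline can be orchestrated as a simple sequence of stages; the sketch below mirrors the steps above, with every stage name and placeholder value being illustrative rather than Literal Labs’ actual tooling:

from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    name: str
    run: Callable[[dict], None]

PIPELINE = [
    Stage("collect",    lambda ctx: ctx.update(raw="<bucket or sensor data>")),
    Stage("preprocess", lambda ctx: ctx.update(bits="<binarised data>")),
    Stage("train",      lambda ctx: ctx.update(model="<trained rule set>")),
    Stage("benchmark",  lambda ctx: ctx.update(report="<accuracy, latency>")),
    Stage("package",    lambda ctx: ctx.update(artifact="<code + libraries>")),
    Stage("deploy",     lambda ctx: ctx.update(target="<MCU or server>")),
    Stage("monitor",    lambda ctx: ctx.update(new_data="<device telemetry>")),
]

context: dict = {}
for stage in PIPELINE:
    stage.run(context)
    print(f"{stage.name}: done")
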
Deploying logic-based AI models

Deploy Smarter, Scale Effortlessly

Logic-based AI models are designed to deploy seamlessly. They’re not just optimised—they’re engineered to fit their deployment environment perfectly. From automated optimisation to real-hardware benchmarking, every step of our deployment pipeline ensures the AI you deploy performs as fast, as efficiently, and as cost-effectively as possible.

Every model finds its sweet spot, balancing inference speed, energy consumption, and hardware constraints for peak efficiency. No wasted compute, no excess power draw.

Automated translation

Models are automatically converted into optimised low-level code, tailored to the target DSP, edge, or cloud platform—ensuring the lowest possible inference latency and energy consumption.
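
As a toy example of the translation step (the rule format and emitted code are assumptions, not the output of Literal Labs’ toolchain), a learned rule set can be compiled into plain C for an MCU target:

def rules_to_c(rules: list[list[tuple[int, bool]]]) -> str:
    """Each rule is a list of (feature_index, expected_bit) literals.
    Emits a C function that sums the votes of all firing rules."""
    lines = ["int predict(const unsigned char *x) {", "    int votes = 0;"]
    for rule in rules:
        cond = " && ".join(f"x[{i}] == {int(bit)}" for i, bit in rule)
        lines.append(f"    votes += ({cond});")
    lines += ["    return votes;", "}"]
    return "\n".join(lines)

print(rules_to_c([[(0, True), (3, False)], [(1, True), (2, True)]]))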

Deployment sweet spot

Logic-based models are rigorously profiled against hardware and deployment constraints to determine the ideal balance of speed, power, and resource efficiency—ensuring every deployment hits peak performance.

Edge to cloud scalability

Deploy on hardware as small as an MCU while scaling effortlessly to server-grade environments. Logical AI adapts to fit, from microcontrollers to high-performance DSPs.

Retrain and redeploy

Update models without full recompilation. Model parameters can be retrained and shipped as smaller, optimised packages, reducing update complexity on edge devices.
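
One way this can work (the packing format here is assumed for illustration) is to ship only the retrained parameters as a small binary blob while the compiled rule structure stays on the device:

import struct

def pack_weights(weights: list[int]) -> bytes:
    """Serialise clause weights as a count header plus little-endian int16s."""
    return struct.pack(f"<H{len(weights)}h", len(weights), *weights)

def unpack_weights(blob: bytes) -> list[int]:
    (count,) = struct.unpack_from("<H", blob)
    return list(struct.unpack_from(f"<{count}h", blob, offset=2))

update = pack_weights([12, -3, 7, 0, 25])
print(len(update), "bytes")   # 12 bytes, far smaller than a full firmware image
assert unpack_weights(update) == [12, -3, 7, 0, 25]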

Validated benchmarks

Predictive performance, inference speed, and power efficiency are tested on real hardware through an automated, remote validation process.
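
A validation harness of this kind might measure latency percentiles for a deployed predict function; everything below (the predict stand-in, the inputs, the percentiles reported) is an illustrative assumption:

import statistics
import time

def benchmark(predict, inputs, warmup: int = 10) -> dict:
    """Report wall-clock latency percentiles in microseconds."""
    for x in inputs[:warmup]:          # warm caches before measuring
        predict(x)
    latencies = []
    for x in inputs:
        start = time.perf_counter()
        predict(x)
        latencies.append((time.perf_counter() - start) * 1e6)
    return {"p50_us": statistics.median(latencies),
            "p99_us": sorted(latencies)[int(0.99 * len(latencies))]}

print(benchmark(lambda x: x & 0xFF, list(range(1000))))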

Versioning

Track model performance over time, detect drift, and trigger retraining when beneficial—ensuring your AI stays accurate, efficient, and aligned with your use case’s needs.
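
A drift trigger can be as simple as comparing live accuracy against the accuracy recorded at deployment time; the tolerance and registry shape below are illustrative assumptions:

def should_retrain(baseline_acc: float, recent_correct: int,
                   recent_total: int, tolerance: float = 0.02) -> bool:
    """Flag retraining when live accuracy drops more than `tolerance`
    below the accuracy recorded when the model was deployed."""
    recent_acc = recent_correct / recent_total
    return baseline_acc - recent_acc > tolerance

registry = {"model": "anomaly-v3", "baseline_acc": 0.94}
print(should_retrain(registry["baseline_acc"], 178, 200))  # True: retrain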

Find out more

Take the next step

Literal Labs will soon launch a training tool that empowers companies to train their own logic-based AI models using their own or public datasets.

Fill in the form to receive an alert when we launch the training tool.

If you'd prefer to get in touch sooner to discuss how logical AI model architecture could help your use case, please contact us.