Tsetlin Machines

Tsetlin machine technology has its roots in the 1960s, alongside other machine learning techniques, but has since been advanced by new leaders in the field to meet the future challenges of AI.

Background

History of the Tsetlin Machine

Mikhail Tsetlin (1924-1966), the creator of learning automata theory, and Victor Varshavsky (1933-2005) in 1961, planning the school-seminar on automata. Varshavsky was Alex Yakovlev's advisor.

At the heart of every Literal Labs AI model lies something extraordinary: the Tsetlin Machine. This powerful yet elegantly simple machine learning architecture enables our models to deliver state-of-the-art speed and ultra-low power consumption while remaining naturally explainable and interpretable.

The Tsetlin Machine is not just another machine learning algorithm. It draws on principles laid down by Mikhail Tsetlin, a visionary Soviet mathematician, who in the 1960s explored a radically different path to artificial intelligence. Rather than mimicking biological neurons, Tsetlin’s approach was rooted in learning automata and game theory. He recognised that logic—expressed through what we now call Tsetlin automata—could classify data more efficiently, forging a new direction in AI.

Together with Victor Varshavsky, advisor to Literal Labs' co-founder Alex Yakovlev, Tsetlin developed theories, algorithms, and applications that solved problems across fields from engineering to economics, sociology, and medicine. Despite his early death in 1966, the research first spurred by Tsetlin branched out into various fields, including control systems and circuit design. But the AI element of Tsetlin's approach lay dormant, until now.

How Tsetlin Machines work

Tsetlin machines are a family of machine learning algorithms, and associated computational architectures, that use the principles of learning automata (called Tsetlin automata) and game theory to create logic propositions for classifying data obtained from the machine's environment. These automata configure connections between input literals, which represent the features in the input data, and the propositions used to produce classification decisions. Then, depending on whether the decisions were correct or erroneous, the feedback logic issues rewards and penalties to the automata.
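To make the mechanism concrete, here is a minimal Python sketch of the inference side of that description: clauses evaluated as conjunctions of literals, with clause votes summed to reach a classification. The hand-written clause masks and the alternating-polarity voting scheme are illustrative assumptions, not Literal Labs' implementation.

```python
# Minimal sketch of Tsetlin machine inference (illustrative, not a production
# implementation). Each clause is the AND of its included literals; clauses
# alternate between positive and negative polarity and vote on the class.

def literals(x):
    """Build the literal vector: each boolean input plus its negation."""
    return x + [not v for v in x]

def clause_output(include, lits):
    """A clause fires only if every literal it includes is True."""
    return all(lit for inc, lit in zip(include, lits) if inc)

def classify(x, clauses):
    """Even-indexed clauses vote +1, odd-indexed clauses vote -1."""
    lits = literals(x)
    votes = sum((1 if j % 2 == 0 else -1) * clause_output(inc, lits)
                for j, inc in enumerate(clauses))
    return 1 if votes >= 0 else 0

# Two hand-written include masks over a 2-bit input; the four literal slots
# are [x0, x1, NOT x0, NOT x1]. In a real Tsetlin machine these masks are
# learned by the Tsetlin automata rather than fixed by hand.
clauses = [
    [True, True, False, False],   # positive clause: x0 AND x1
    [False, False, True, True],   # negative clause: NOT x0 AND NOT x1
]
print(classify([True, True], clauses))    # -> 1
print(classify([False, False], clauses))  # -> 0
```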

A Tsetlin automaton. From Mikhail Tsetlin's book Automaton Theory and Modeling of Biological Systems, Volume 102
The International Workshop on Artificial Intelligence in Repino, near Leningrad (April 18-24, 1977). From right to left: L. Zadeh, participating in a discussion; J. McCarthy, the computer scientist known as the father of AI and a Turing Award winner; V. I. Varshavsky, the Soviet pioneer in the field of collective behaviour of automata; D. A. Pospelov, the founder of AI in the Soviet Union.
During one of Varshavsky's seminars in Leningrad in the 1980s (Varshavsky standing, Alex Yakovlev sitting right behind him).
The Breakthrough

Next Generation AI

The breakthrough algorithm combining Tsetlin automata with propositional logic was originally published in 2018 by Ole-Christoffer Granmo, chair of Literal Labs' Technical Steering Committee and a professor at Norway's University of Agder. Its operation was first demonstrated on image recognition, constructing logic propositions (known as clauses) from literals whose connections are configured by Tsetlin automata.

Combining Tsetlin automata with propositional logic yields a highly efficient computational model for ML, in both energy and performance. This model can capture the complex behaviour of systems as teams or collectives of automata, allowing optimal decisions to be reached in complex systems with greater reliability and redundancy. It operates on the principles of statistical optimality and physical distribution in space, and it alleviates criticalities and anomalies.

Tsetlin automata are trained by evolving each automaton through its states, which form a linear sequence. Each state represents the automaton's level of confidence in performing its action. The actions are associated with two subsets of states: one for switching a connection between an input literal and a clause ON, the other for switching it OFF. Because the states are organised in a linear sequence, this level of confidence can be controlled by applying simple transitions between states, thereby either rewarding or penalising the automaton's actions. These actions are somewhat similar to weights in neural networks. However, unlike complex multiplication-based weights, the Tsetlin automata "weights" are simple logic signals that control which input literals are included in each clause.
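The state-transition scheme described above can be sketched in a few lines of Python. The class below is an illustrative two-action automaton; the number of states and the starting state are assumptions for the example, not values from any particular implementation.

```python
# A single two-action Tsetlin automaton with 2*N states on a line:
# states 1..N select the action "exclude" (connection OFF) and states
# N+1..2N select "include" (connection ON). Rewards move the state deeper
# into the current action's half; penalties move it towards the other half.

class TsetlinAutomaton:
    def __init__(self, n_states_per_action=100):
        self.n = n_states_per_action
        self.state = self.n  # start at the boundary, weakly excluding

    def action(self):
        """'include' in the upper half of the state line, 'exclude' below."""
        return "include" if self.state > self.n else "exclude"

    def reward(self):
        """Reinforce the current action by moving away from the boundary."""
        if self.action() == "include":
            self.state = min(self.state + 1, 2 * self.n)
        else:
            self.state = max(self.state - 1, 1)

    def penalise(self):
        """Weaken the current action by moving towards the opposite half."""
        self.state += -1 if self.action() == "include" else 1

ta = TsetlinAutomaton()
ta.penalise()       # one penalty pushes it across the boundary
print(ta.action())  # -> "include"
```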

Yakovlev, together with his Literal Labs co-founder Rishad Shafik and their team at Newcastle University, has been working on hardware and software implementations of Tsetlin machines, adding new data representation techniques (e.g. booleanisation and binarisation), parallelisation and compression methods based on indexing input literals, tiled architectures, and hardware-software codesign of machine learning systems. These techniques amplify the TM's advantages by orders of magnitude, delivering up to 1,000X faster inferencing and orders-of-magnitude energy savings compared with neural networks. They have also brought a new level of understanding of the dynamics of machine learning by visualising the learning process and identifying important analytical characteristics of TM hyperparameters, such as thresholds on clause voting and feedback activation.
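As a small illustration of the data representation step mentioned above, the sketch below shows one simple quantile-based booleanisation of continuous features, turning each value into threshold literals a Tsetlin machine can consume. This particular scheme is an assumption for the example; the techniques actually used by the team are described in their published papers.

```python
# One possible booleanisation scheme (illustrative): each continuous feature
# becomes several boolean literals of the form "value >= threshold", with
# thresholds placed at quantiles of that feature's observed values.

import numpy as np

def booleanise(X, n_thresholds=3):
    """Convert a (samples, features) float array into boolean literals."""
    columns = []
    for f in range(X.shape[1]):
        thresholds = np.quantile(X[:, f], np.linspace(0.25, 0.75, n_thresholds))
        for t in thresholds:
            columns.append(X[:, f] >= t)
    return np.stack(columns, axis=1)

X = np.array([[0.1, 5.0],
              [0.4, 2.0],
              [0.9, 7.5]])
print(booleanise(X).astype(int))  # each row becomes a vector of 0/1 literals
```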

A diagram illustrating how the feedback system of a Tsetlin machine works. Given an observation (training data), the Tsetlin machine decides whether a literal should be memorised or forgotten within the resulting model. From tsetlinmachine.org
Benchmarking Tsetlin machines

Literal Labs' performance

Interested to learn how Tsetlin machines stack up against other AI technologies, including neural networks? Find out by exploring our models' benchmarks. Or you can learn more about how elements of our technology have been developed by exploring our team's published research papers.