Tsetlin Machines

Tsetlin machine technology has its roots in the 1960s alongside other machine learning techniques, but has been moved forward by new leaders in the field to meet the future challenges of AI.

Background

History of the Tsetlin Machine

Mikhail Tsetlin (1924-1966), the creator of learning automata theory, with Victor Varshavsky in 1961, planning the school-seminar on automata. Varshavsky was Alex Yakovlev's advisor.

Within each of Literal Labs’ AI models sits something known as a Tsetlin machine. It’s one of the foundations which enables our AI models to be naturally explainable while offering state-of-the-art performance.

A Tsetlin machine (TM) is a machine learning algorithm, with associated computational architectures, that uses the principles of learning automata (known as Tsetlin automata) and game theory to create logic propositions for classifying data obtained from the machine's environment. These automata configure connections between input literals, which represent the features in the input data, and the propositions used to produce classification decisions. Then, based on whether those decisions were correct or erroneous, feedback logic issues rewards and penalties to the automata.
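To make this concrete, here is a minimal sketch of how such clause-based classification might work. The clause definitions, the voting rule and the threshold are illustrative assumptions, not Literal Labs' implementation.

```python
# Minimal sketch of Tsetlin-machine-style inference (illustrative only).
# A "literal" is an input bit or its negation; a clause is the AND of the
# literals the automata have chosen to include; the class is decided by
# summing clause votes. The clauses here are hand-picked examples.

def literals(x):
    """Expand a Boolean input vector into literals: x followed by NOT x."""
    return list(x) + [1 - v for v in x]

def clause_output(clause, lits):
    """A clause fires (returns 1) only if every included literal is 1."""
    return int(all(lits[i] for i in clause))

def classify(x, positive_clauses, negative_clauses, threshold=0):
    """Sum votes: positive clauses vote for the class, negative against."""
    lits = literals(x)
    votes = sum(clause_output(c, lits) for c in positive_clauses) \
          - sum(clause_output(c, lits) for c in negative_clauses)
    return int(votes > threshold)

# Example: two input features -> literals [x0, x1, NOT x0, NOT x1]
# (indices 0, 1, 2, 3). These clause contents are assumptions.
positive = [[0, 3]]   # x0 AND NOT x1
negative = [[1]]      # x1
print(classify([1, 0], positive, negative))  # -> 1
```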

The TM approach is based on the ideas of the Soviet mathematician Mikhail Tsetlin. In the 1960s, at that early stage in the development of what was later called artificial intelligence (AI), Tsetlin realised the potential of modelling learning and logic with automata, as opposed to the detailed models of elementary biological neurons that other researchers were then pursuing, and which later became neural networks (NNs).

A Tsetlin automaton. From Mikhail Tsetlin's book Automaton Theory and Modeling of Biological Systems, Volume 102.

Over the period from 1960 until Tsetlin's untimely death in 1966, Tsetlin and his associates, including Victor Varshavsky, advisor to Literal Labs' co-founder Alex Yakovlev, developed theories, models, algorithms, computer programs and applications that demonstrated the effectiveness of this approach in solving various analysis, optimisation and adaptive control problems, in numerous applications from engineering to economics, sociology and medicine.

After 1966, the research branched into various fields, including the control of complex systems and circuit design, but in its holistic form the Tsetlin approach to AI was left largely untouched.

The International Workshop on Artificial Intelligence in Repino, near Leningrad (April 18-24, 1977). From right to left: L. Zadeh, participating in a discussion; J. McCarthy, the computer scientist known as the father of AI and a Turing Award winner; V. I. Varshavsky, the Soviet classic in the field of collective behaviour of automata; D. A. Pospelov, the founder of AI in the Soviet Union.
During one of Varshavsky's seminars in Leningrad in the 1980s (Varshavsky standing, Alex Yakovlev sitting right behind him).
The Breakthrough

Next Generation AI

The breakthrough algorithm combining Tsetlin automata with propositional logic was originally published in 2018 by Ole-Christoffer Granmo, chair of Literal Labs' Technical Steering Committee and a professor at Norway's University of Agder. Its operation was initially demonstrated in image recognition, by constructing logic propositions (known as clauses) from literals, with the connections configured and controlled by Tsetlin automata.

The combination of Tsetlin automata and propositional logic gives rise to a computational model for ML that is very efficient in both energy and performance. This model can capture the complex behaviour of systems in the form of teams or collectives of automata, allowing optimal decisions to be reached in complex systems with greater reliability and redundancy. It operates on the principles of statistical optimality and physical distribution in space, alleviating criticalities and anomalies.

Tsetlin automata are trained by evolving each automaton through its states, which form a linear sequence. Each state represents the automaton's level of confidence in performing its actions. The actions are associated with two subsets of states: one for switching the connection between an input literal and a clause ON, and the other for switching it OFF. Because the states are organised in a linear sequence, this level of confidence can be controlled by applying simple transitions between the states, thereby either rewarding or penalising the automaton's actions. These actions are somewhat similar to weights in NNs. However, unlike complex multiplication-based weights, the Tsetlin automata "weights" are simple logic signals that control the configuration of input literals in the clauses.
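As a rough illustration of this state-based confidence, here is a minimal sketch of a single two-action Tsetlin automaton. The state count and the reward/penalty transition rules follow the generic textbook scheme rather than any particular implementation.

```python
# Sketch of a single Tsetlin automaton with 2N states arranged in a line.
# States 1..N select the EXCLUDE action, states N+1..2N select INCLUDE.
# Rewards push the automaton deeper into its current action's half of the
# line (more confidence); penalties push it towards the boundary and,
# eventually, across it (switching the action).

class TsetlinAutomaton:
    def __init__(self, n=100):
        self.n = n                 # states per action
        self.state = n             # start at the boundary, EXCLUDE side

    @property
    def action(self):
        return "include" if self.state > self.n else "exclude"

    def reward(self):
        # Strengthen confidence in the current action.
        if self.action == "include":
            self.state = min(self.state + 1, 2 * self.n)
        else:
            self.state = max(self.state - 1, 1)

    def penalise(self):
        # Weaken confidence; repeated penalties flip the action.
        if self.action == "include":
            self.state -= 1
        else:
            self.state += 1

# One automaton per (literal, clause) pair decides whether that literal
# is included in the clause:
ta = TsetlinAutomaton(n=3)
print(ta.action)             # exclude
ta.penalise()                # pushed across the boundary
print(ta.action)             # include
ta.reward(); ta.reward()     # confidence grows towards state 2N
print(ta.state, ta.action)   # 6 include
```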

Yakovlev, together with his Literal Labs co-founder Rishad Shafik and their team at Newcastle University, has been working on hardware and software implementations of Tsetlin machines, adding new data representation techniques (e.g. booleanisation and binarisation), parallelisation and compression methods based on indexing input literals, tiled architectures, and hardware-software codesign of ML learning systems. Together, these techniques amplify the TM's advantages, delivering up to 1,000x faster inference and orders-of-magnitude energy savings compared with neural networks. The work has also brought a new level of understanding of the dynamics of machine learning, by visualising the learning process and identifying important analytical characteristics of TM hyperparameters, such as thresholds on clause voting and feedback activation.
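To illustrate what booleanisation can look like in practice, here is a minimal thermometer-encoding sketch. The thresholds and feature values are invented for the example; this is not the Newcastle team's actual pipeline.

```python
# Sketch of booleanisation by thermometer encoding (illustrative).
# Each continuous feature becomes a run of Boolean bits, one per
# threshold, so a TM can reason over it with propositional logic.

def thermometer_encode(value, thresholds):
    """One bit per threshold: 1 if the value exceeds that threshold."""
    return [int(value > t) for t in thresholds]

def booleanise(sample, thresholds_per_feature):
    bits = []
    for value, thresholds in zip(sample, thresholds_per_feature):
        bits.extend(thermometer_encode(value, thresholds))
    return bits

# Example: two features with hand-picked (assumed) thresholds.
thresholds = [[0.25, 0.5, 0.75], [10, 20, 30]]
print(booleanise([0.6, 14], thresholds))  # -> [1, 1, 0, 1, 0, 0]
```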

A diagram illustrating how the feedback system works in a Tsetlin machine. Given an observation (training data), the Tsetlin machine decides whether a literal needs to be memorised or forgotten within the resulting model. From tsetlinmachine.org.
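To tie the diagram back to the automata above, here is a hedged sketch of the memorise/forget decision for the literals of a single clause, loosely following the Type I feedback described in the TM literature. The probabilities and the value of the s hyperparameter are simplified assumptions, and the sketch reuses the TsetlinAutomaton class defined earlier.

```python
import random

# Simplified Type I ("memorise/forget") feedback for one clause on a
# positive training example; reuses the TsetlinAutomaton class from the
# sketch above. s > 1 is the specificity hyperparameter (value assumed
# here): higher s makes clauses retain more literals.

def type_i_feedback(automata, lits, clause_fired, s=3.9):
    for ta, lit in zip(automata, lits):
        if clause_fired and lit == 1:
            # Literal agreed with a firing clause: memorise it with high
            # probability, i.e. push the automaton towards INCLUDE.
            if random.random() < (s - 1) / s:
                if ta.action == "include":
                    ta.reward()
                else:
                    ta.penalise()
        elif random.random() < 1 / s:
            # Clause missed or literal was 0: gently forget, i.e. push
            # the automaton towards EXCLUDE.
            if ta.action == "exclude":
                ta.reward()
            else:
                ta.penalise()
```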
Benchmarking Tsetlin machines

Literal Labs' performance

Interested to learn how Tsetlin machines stack up against other AI technologies, including neural networks? Find out by exploring our models' benchmarks. Or you can learn more about how elements of our technology have been developed by exploring our team's published research papers.