
Open Access — ePrints

Dynamic Tsetlin Machine Accelerators for On-Chip Training using FPGAs

Lookup NU author(s): Gang MaoORCiD, Sidharth Maheshwari, Bob Pattison, Dr Zhuang Shao, Professor Rishad ShafikORCiD, Professor Alex YakovlevORCiD

Licence

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).


Abstract

The increasing demand for data privacy and security in machine learning (ML) applications has given impetus to effective edge training on Internet-of-Things (IoT) nodes. Edge training aims to deliver speed, energy efficiency and adaptability within the resource constraints of the nodes. Deploying and training Deep Neural Network (DNN)-based models at the edge, although accurate, poses significant challenges arising from the back-propagation algorithm's complexity, bit-precision trade-offs, and the heterogeneity of DNN layers. This paper presents a Dynamic Tsetlin Machine (DTM) training accelerator as an alternative to DNN implementations. The DTM combines logic-based on-chip inference with finite-state automata-driven learning within the same Field Programmable Gate Array (FPGA) package. Underpinned by the Vanilla and Coalesced Tsetlin Machine algorithms, the dynamic aspect of the accelerator design allows run-time reconfiguration targeting different datasets, model architectures, and model sizes without resynthesis. This makes the DTM well suited to multivariate sensor-based edge tasks. Compared to DNNs, the DTM trains with fewer multiply-accumulate operations and requires no derivative computation. It is a data-centric ML algorithm that learns by aligning Tsetlin automata with input data to form logical propositions, enabling efficient Look-up-Table (LUT) mapping and frugal Block RAM usage in FPGA training implementations. The proposed accelerator offers 2.54x more Giga operations per second per Watt (GOP/s per W) and uses 6x less power than the next-best comparable design.
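The finite-state, derivative-free learning the abstract refers to can be illustrated with a minimal sketch of a two-action Tsetlin automaton, the basic learning primitive of Tsetlin Machines. The state depth and class structure below are illustrative assumptions for exposition only, not the paper's accelerator design:

```python
# Minimal sketch of a two-action Tsetlin automaton (TA).
# A TA has 2n states: states 1..n select action "exclude" (0),
# states n+1..2n select action "include" (1). Learning is just
# state increments/decrements -- no gradients or derivatives.
class TsetlinAutomaton:
    def __init__(self, n_states=100):
        self.n = n_states          # states per action (assumed depth)
        self.state = n_states      # start at the exclude/include boundary

    def action(self):
        # Current decision: include the literal in the clause or not.
        return 1 if self.state > self.n else 0

    def reward(self):
        # Reinforce the current action: move deeper into its half.
        if self.action() == 1:
            self.state = min(self.state + 1, 2 * self.n)
        else:
            self.state = max(self.state - 1, 1)

    def penalize(self):
        # Weaken the current action: move toward the opposite half.
        if self.action() == 1:
            self.state -= 1
        else:
            self.state += 1
```

Because each automaton is a small bounded counter, a bank of them maps naturally onto FPGA LUTs and Block RAM, which is the hardware affinity the abstract highlights.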


Publication metadata

Author(s): Mao G, Rahman T, Maheshwari S, Pattison B, Shao Z, Shafik R, Yakovlev A

Publication type: Article

Publication status: Published

Journal: IEEE Transactions on Circuits and Systems - I: Regular Papers

Year: 2025

Pages: epub ahead of print

Online publication date: 07/05/2025

Acceptance date: 23/04/2025

Date deposited: 28/04/2025

ISSN (print): 1549-8328

ISSN (electronic): 1558-0806

Publisher: IEEE

URL: https://doi.org/10.1109/TCSI.2025.3564875

DOI: 10.1109/TCSI.2025.3564875

ePrints DOI: 10.57711/b1yq-cz93




Funding

Funder reference   Funder name
EP/X036006/1       EPSRC
EP/X039943/1       UKRI-RCN
