Transistor-based hardware neural network system: simulation and analysis

Student thesis: PhD Thesis

Abstract

The burgeoning field of artificial intelligence has spurred a shift in computational paradigms, necessitating specialized hardware that can meet the demanding requirements of neural network processing. This thesis presents a comprehensive study of the design, simulation, and analysis of hardware components integral to artificial intelligence systems, focusing on innovations in activation function generator circuits and linear transformation circuits. We elucidate the design and performance characteristics of both types of circuit, and provide simulation examples to illustrate their potential as viable alternatives to conventional neural network accelerators.

We commence by detailing the design process of an innovative activation function circuit (AFC) built from a pair of complementary metal-oxide-semiconductor (CMOS) transistors, which produces a novel activation function whose learning performance is on par with activation functions widely used in machine learning architectures. Through rigorous analysis, we demonstrate the efficacy of our proposed AFC in facilitating efficient information propagation within hardware neural network (HNN) systems.
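The thesis's novel AFC transfer curve is not reproduced in this abstract. As a hedged illustration of the general simulation approach, one can take a (here entirely hypothetical) measured input/output voltage curve of such a circuit and use it as a software activation function via interpolation; all names and data below are assumptions for illustration only.

```python
import numpy as np

# Hypothetical measured AFC input/output voltages (illustrative, not
# the thesis's actual circuit data):
v_in = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
v_out = np.array([-0.9, -0.7, 0.0, 0.7, 0.9])

def afc_activation(x):
    """Use the measured transfer curve as a neural-network activation,
    linearly interpolating between measured points."""
    return np.interp(x, v_in, v_out)

afc_activation(0.5)  # interpolates between (0.0, 0.0) and (1.0, 0.7) -> 0.35
```

A curve fitted this way can be dropped into a standard training framework to compare learning performance against common activations such as tanh or ReLU.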

Subsequently, we introduce our design of a multiply-accumulate circuit (MAC), which achieves state-of-the-art performance in terms of energy efficiency, response time, and silicon footprint for the computation stage of linear transformation operations. Optimising this circuit is essential to reducing hardware footprint and power consumption, both crucial factors in the deployment of HNN systems, especially in resource-constrained environments.
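For context, the operation the MAC circuit implements in hardware can be sketched in software as follows; the function names are assumptions for illustration, and a linear (fully connected) layer reduces to one such multiply-accumulate per output neuron.

```python
def mac(weights, inputs, acc=0.0):
    """Sequentially multiply and accumulate, as a hardware MAC unit would."""
    for w, x in zip(weights, inputs):
        acc += w * x
    return acc

def linear_layer(weight_matrix, inputs):
    """A linear transformation is one MAC per row of the weight matrix."""
    return [mac(row, inputs) for row in weight_matrix]

y = linear_layer([[0.5, -1.0], [2.0, 0.25]], [1.0, 2.0])
# y == [0.5*1.0 - 1.0*2.0, 2.0*1.0 + 0.25*2.0] == [-1.5, 2.5]
```

Because this inner loop dominates the compute of neural network inference, the energy and latency of the MAC stage largely determine the efficiency of the whole accelerator.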

At the system level, our analysis delves into the accumulation of errors within neural networks, providing insight into the propagation of these errors and their impact on overall network performance. Recognising the limitations of current training methodologies, we propose potential optimisation algorithms aimed at enhancing the robustness and precision of hardware-based artificial intelligence systems. Preliminary results indicate promising improvements, suggesting the viability of our approach.
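The general phenomenon of error accumulation can be illustrated with a minimal toy model (an assumption for illustration, not the thesis's actual error model): each layer of a deep network adds a small, independent zero-mean perturbation, and the total deviation tends to grow with depth, roughly as the square root of the layer count for independent noise.

```python
import random

def noisy_forward(x, n_layers, sigma):
    """Pass a value through identity layers, each adding zero-mean
    Gaussian 'hardware' noise (a deliberately simplified model)."""
    for _ in range(n_layers):
        x = x + random.gauss(0.0, sigma)  # per-layer analog error
    return x

random.seed(0)
ideal = 1.0
deviations = [abs(noisy_forward(ideal, depth, sigma=0.01) - ideal)
              for depth in (1, 8, 64)]
# Deviations generally grow with depth, which is what motivates
# error-aware training and optimisation at the system level.
```

This kind of Monte Carlo sweep over depth and noise level is a common way to study where hardware error budgets should be spent.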

Lastly, on the basis of our analysis at the system level, we propose several potential applications of this technology. These include, but are not limited to, real-time artificial intelligence processing in edge devices and advanced decision-making systems where low latency and high computing capabilities are paramount.

In conclusion, our research represents a significant stride towards the realisation of efficient and powerful HNN systems. The innovations in activation function generation and linear transformation, coupled with a systematic analysis of error propagation, pave the way for future advances in the field. These have the potential to reshape the landscape of artificial intelligence implementation across various industries, particularly by enhancing operational efficiency and enabling modifications in a standalone manner.
Date of Award: 13 Jul 2025
Original language: English
Awarding Institution:
  • University of Nottingham
Supervisors: Jim Greer (Supervisor) & Amin Farjudian (Supervisor)

Keywords

  • Hardware Neural Networks
  • AI Implementation
  • Neural Network Processing
  • Error Accumulation Analysis
  • Edge Computing
