IJSMT Journal

International Journal of Science, Strategic Management and Technology

An International, Peer-Reviewed, Open Access Scholarly Journal
Indexed in recognized academic databases · DOI via Crossref
The journal adheres to established scholarly publishing, peer-review, and research ethics guidelines set by the UGC.

ISSN: 3108-1762 (Online)

Plagiarism Passed · Peer Reviewed · Open Access

APPROXIMATE SOFTMAX ARCHITECTURE FOR ENERGY-EFFICIENT DEEP NEURAL NETWORKS

AUTHORS:
Athul Krishna R, Deepak S J, Fayas Mohamed A
Affiliation:
Department of Electronics and Communication Engineering, Kongunadu College of Engineering and Technology, Trichy
CC BY 4.0 License:
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract

Softmax is commonly used in neural network-based classification systems to convert output values into probabilities, but its conventional implementation involves complex operations such as exponentials and division, which are inefficient for FPGA and VLSI-based hardware due to high area, power, and latency requirements. This project presents a hardware-efficient approximate softmax architecture designed for low-power FPGA systems using a simplified Top-1 approximation approach. The proposed design identifies the dominant output class using comparator-based logic and fixed-point arithmetic, thereby eliminating computationally expensive operations while maintaining correct decision-making. The approximate softmax module is implemented using synthesizable SystemVerilog and validated through simulation and synthesis using Xilinx Vivado, with Python used only for basic numerical verification. The developed design is hardware-ready and suitable for integration into FPGA-based neural network accelerators and embedded VLSI systems, with scope for future hardware implementation.
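The core observation behind a Top-1 approximation is that softmax is monotonic: the class with the largest logit is also the class with the largest probability, so a classifier that only needs the final decision can skip the exponentials and division entirely and use comparators over fixed-point values. The sketch below illustrates this idea numerically in Python (the paper's own verification language); the Q8.8 fixed-point format and the helper names are illustrative assumptions, not details taken from the paper.

```python
# Numerical sketch of the Top-1 approximation idea: the dominant class is
# found with comparator-style logic over fixed-point logits, with no exp()
# and no division. The Q8.8 word length here is an assumption for
# illustration, not the paper's actual hardware format.
import math

def to_fixed(x, frac_bits=8):
    """Quantize a real value to fixed-point (illustrative Q8.8 format)."""
    return int(round(x * (1 << frac_bits)))

def top1_approx(logits, frac_bits=8):
    """Comparator-chain argmax over fixed-point logits: no exp, no divide."""
    fx = [to_fixed(v, frac_bits) for v in logits]
    best_idx = 0
    for i in range(1, len(fx)):   # models a chain of pairwise comparators
        if fx[i] > fx[best_idx]:
            best_idx = i
    return best_idx

def exact_softmax_argmax(logits):
    """Reference model: full softmax (with max-subtraction), then argmax."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return probs.index(max(probs))

logits = [1.3, -0.7, 4.2, 0.1]
assert top1_approx(logits) == exact_softmax_argmax(logits)  # same decision
```

Because softmax preserves the ordering of its inputs, the approximate and exact paths agree on the predicted class whenever the fixed-point quantization preserves the ordering of the logits, which is the sense in which the approximation "maintains correct decision-making".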

Keywords
Article Metrics: 42 views · 0 PDF downloads
HOW TO CITE

APA:
R, A. K., J, D. S., & A, F. M. (2026). Approximate Softmax Architecture for Energy-Efficient Deep Neural Networks. International Journal of Science, Strategic Management and Technology, 02(03). https://doi.org/10.55041/ijsmt.v2i3.262

MLA:
R, Athul Krishna, et al. "Approximate Softmax Architecture for Energy-Efficient Deep Neural Networks." International Journal of Science, Strategic Management and Technology, vol. 02, no. 03, 2026. https://doi.org/10.55041/ijsmt.v2i3.262.

Chicago:
R, Athul Krishna, Deepak S J, and Fayas Mohamed A. "Approximate Softmax Architecture for Energy-Efficient Deep Neural Networks." International Journal of Science, Strategic Management and Technology 02, no. 03 (2026). https://doi.org/10.55041/ijsmt.v2i3.262.

References
1. T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, and O. Temam, "DianNao: A Small-Footprint High-Throughput Accelerator for Ubiquitous Machine-Learning," ACM SIGARCH Computer Architecture News, vol. 42, no. 1, pp. 269–284, 2014.

2. Y. LeCun, Y. Bengio, and G. Hinton, "Deep Learning," Nature, vol. 521, no. 7553, pp. 436–444, 2015.

3. M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, "XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks," European Conference on Computer Vision, vol. 9908, pp. 525–542, 2016.

4. S. Mittal, "A Survey of Techniques for Improving Energy Efficiency in Deep Neural Networks," International Journal of Computer Vision and Image Processing, vol. 6, no. 4, pp. 1–21, 2016.

5. V. Sze, Y.-H. Chen, T.-J. Yang, and J. S. Emer, "Efficient Processing of Deep Neural Networks: A Tutorial and Survey," Proceedings of the IEEE, vol. 105, no. 12, pp. 2295–2329, 2017.

6. N. P. Jouppi et al., "In-Datacenter Performance Analysis of a Tensor Processing Unit," IEEE Micro, vol. 38, no. 2, pp. 10–19, 2018.

7. V. Shatravin, D. Shashev, and S. Shidlovskiy, "Implementation of the Softmax Activation for Reconfigurable Neural Network Hardware Accelerators," Applied Sciences, vol. 13, no. 23, pp. 12784–12795, 2023.

8. Kim, D. Lee, J. Kim, J. Park, and S. E. Lee, "Hardware Accelerator for Approximation-Based Softmax and Layer Normalization in Transformers," Electronics, vol. 14, no. 12, pp. 2337–2347, 2025.

 
Ethics and Compliance
✓ All ethical standards met
This article has undergone plagiarism screening and double-blind peer review. Editorial policies have been followed. Authors retain copyright under CC BY-NC 4.0 license. The research complies with ethical standards and institutional guidelines.
Similar Articles
- A Study on Logistics Services Quality in Freight Forwarding on Southern Freight Carriers — A. Mathivanan (2026). DOI: 10.55041/ijsmt.v2i3.269
- Ad Campaign Tracker — Yoheswari S et al. (2026). DOI: 10.55041/ijsmt.v2i4.019
- Design and Performance Analysis of Greywater Treatment using Multilayer Biofiltration — Kaviya T et al. (2026). DOI: 10.55041/ijsmt.v2i3.397