A modular approximation methodology for efficient fixed-point hardware implementation of the sigmoid function
2022 (English). In: IEEE Transactions on Industrial Electronics, ISSN 0278-0046, E-ISSN 1557-9948, Vol. 69, no. 10, p. 10694-10703. Article in journal (Refereed). Published.
Abstract [en]
The sigmoid function is a widely used nonlinear activation function in neural networks. In this article, we present a modular approximation methodology for efficient fixed-point hardware implementation of the sigmoid function. Our design consists of three modules: piecewise linear (PWL) approximation as the initial solution, Taylor series approximation of the exponential function, and Newton-Raphson method-based approximation as the final solution. Its modularity enables the designer to flexibly choose the most appropriate approximation method for each module separately. Performance evaluation results indicate that our work strikes an appropriate balance among the objectives of approximation accuracy, hardware resource utilization, and performance.
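The three-module structure described in the abstract (a piecewise-linear initial solution, a truncated Taylor series for the exponential, and Newton-Raphson refinement) can be sketched in plain floating point. The sketch below is an illustrative assumption, not the paper's fixed-point design: the PWL seed coefficients (the classic 24/17 − 8d/17 reciprocal seed), the series length, and the function names are all choices made here for demonstration.

```python
def taylor_exp(t, terms=20):
    # Truncated Taylor series of e^t around 0; adequate for the small
    # negative arguments produced below (the paper's fixed-point range
    # handling is not reproduced here).
    s, term = 1.0, 1.0
    for n in range(1, terms):
        term *= t / n
        s += term
    return s

def pwl_reciprocal_seed(d):
    # Illustrative piecewise-linear initial guess for 1/d on (1, 2]:
    # a single linear segment (classic Newton-Raphson division seed,
    # rescaled); the paper's actual PWL breakpoints are not reproduced.
    return 24.0 / 17.0 - (8.0 / 17.0) * d

def sigmoid_approx(x, nr_iters=2):
    # sigmoid(x) = 1 / (1 + e^{-x}); for |x| the denominator lies in (1, 2],
    # and the sign case is recovered via sigmoid(-x) = 1 - sigmoid(x).
    d = 1.0 + taylor_exp(-abs(x))      # Taylor-series module
    y = pwl_reciprocal_seed(d)         # PWL module: initial solution
    for _ in range(nr_iters):          # Newton-Raphson module: y <- y(2 - d*y)
        y = y * (2.0 - d * y)
    return y if x >= 0 else 1.0 - y
```

Each Newton-Raphson iteration roughly squares the relative error of the reciprocal, so two iterations on top of the linear seed already reach errors on the order of 1e-5, which illustrates how the modules trade accuracy against hardware cost independently of one another.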
Place, publisher, year, edition, pages
IEEE, 2022. Vol. 69, no 10, p. 10694-10703
Keywords [en]
Artificial neural networks (NNs), FPGA, hardware acceleration, Newton-Raphson (NR) method, sigmoid function
National Category
Condensed Matter Physics
Identifiers
URN: urn:nbn:se:umu:diva-203247
DOI: 10.1109/TIE.2022.3146573
ISI: 000790866600099
Scopus ID: 2-s2.0-85124247013
OAI: oai:DiVA.org:umu-203247
DiVA, id: diva2:1727833
Available from: 2023-01-17. Created: 2023-01-17. Last updated: 2023-08-28. Bibliographically approved.