This paper examines the circuit complexity of sigmoid activation feedforward artificial neural networks by placing them amongst several classic Boolean and threshold gate circuit complexity classes. The starting point is the class NN(k) defined by Shawe-Taylor et al. (1992) [Classes of feedforward neural nets and their circuit complexity. Neural Networks 5(6), 971-977]. For a better characterisation, we introduce two additional classes, NN(k)(Delta) and NN(k)(Delta,epsilon), having less restrictive conditions than NN(k) concerning fan-in and accuracy, and proceed to prove relations amongst these three classes and well-established circuit complexity classes. To do so, a particular class of Boolean functions F(Delta) is first introduced, and we show how a threshold gate circuit can be built recursively for any f(Delta) belonging to F(Delta). As the G-functions (computing the carries) are f(Delta) functions, a class of solutions is obtained for threshold gate adders. We then constructively prove the inclusions amongst the circuit complexity classes. This is done by converting the sigmoid feedforward artificial neural network into an equivalent threshold gate circuit (Shawe-Taylor et al., 1992). Each threshold gate is then replaced by a multiple-input adder having a binary tree structure, relaxing the logarithmic fan-in condition of Shawe-Taylor et al. (1992) to (almost) polynomial. This means that larger classes of sigmoid activation feedforward neural networks can be implemented in polynomial-size Boolean circuits with a small constant fan-in, at the expense of a logarithmic factor increase in the number of layers. Similar results are obtained for threshold circuits and are linked with the previous ones. The main conclusion is that there are interesting fan-in dependent depth-size tradeoffs when trying to digitally implement sigmoid activation feedforward neural networks. Copyright 1996 Elsevier Science Ltd
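The core construction summarised above, replacing each threshold gate by a binary tree of constant fan-in adders, can be made concrete with a minimal sketch. This is an illustrative simplification, not the paper's actual construction: the function names are hypothetical, and it works over plain integers rather than the Boolean adder circuits the paper builds.

```python
# Illustrative sketch only (hypothetical names, not the paper's construction):
# a threshold gate whose weighted sum is computed by a binary tree of
# 2-input additions, i.e. constant fan-in at the cost of depth ceil(log2 n).

def tree_sum(values):
    """Sum a list by pairwise (fan-in 2) addition.

    Each pass halves the number of partial sums, so the 'circuit depth'
    of the summation is logarithmic in the number of inputs.
    """
    values = list(values)
    while len(values) > 1:
        values = [values[i] + values[i + 1] if i + 1 < len(values) else values[i]
                  for i in range(0, len(values), 2)]
    return values[0]

def threshold_gate(weights, inputs, theta):
    """Output 1 iff the weighted sum of the inputs reaches the threshold theta."""
    s = tree_sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= theta else 0

# Example: a 3-input majority gate as a threshold gate with unit weights.
print(threshold_gate([1, 1, 1], [1, 0, 1], 2))  # → 1
print(threshold_gate([1, 1, 1], [1, 0, 0], 2))  # → 0
```

The trade-off the abstract describes is visible here: a direct threshold gate sums all n weighted inputs in one step (fan-in n, depth 1), whereas the tree version uses only fan-in 2 gates but needs about log2(n) layers.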