Processing directed acyclic graphs with recursive neural networks

IEEE Trans Neural Netw. 2001;12(6):1464-70. doi: 10.1109/72.963781.

Abstract

Recursive neural networks are designed for processing graphs and extend the well-known recurrent model for processing sequences. As formulated in Frasconi et al. (1998), recursive neural networks can deal only with directed ordered acyclic graphs (DOAGs), in which the children of any given node are ordered. While this assumption is reasonable in some applications, it introduces unnecessary constraints in others. In this paper, it is shown that the constraint on the ordering can be relaxed by using an appropriate weight-sharing scheme that guarantees the independence of the network output with respect to permutations of the arcs leaving each node. The method can be used with graphs having low connectivity and, in particular, few outgoing arcs. Some theoretical properties of the proposed architecture are given, which guarantee that its approximation capabilities are maintained despite the weight sharing.
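
As an illustrative sketch (not the paper's exact formulation), one simple way to obtain the permutation invariance the abstract describes is to transform every child's state with the same shared weight matrix and aggregate the results by summation, so the parent's state cannot depend on the order of its outgoing arcs. All names, dimensions, and the tanh update below are assumptions made for illustration only.

```python
import numpy as np

# Sketch of an order-invariant recursive state update for DAG nodes.
# Assumed, illustrative choices: state/label sizes, tanh nonlinearity,
# and the variable names W_label / W_child.

STATE_DIM = 8   # assumed hidden-state size
LABEL_DIM = 4   # assumed node-label size

rng = np.random.default_rng(0)
W_label = rng.normal(scale=0.1, size=(STATE_DIM, LABEL_DIM))
W_child = rng.normal(scale=0.1, size=(STATE_DIM, STATE_DIM))  # shared across all children
bias = np.zeros(STATE_DIM)

def node_state(label, child_states):
    """State of a node from its label and its children's states.

    Summing the children's contributions through a single shared matrix
    makes the result invariant to any permutation of the children.
    """
    children_sum = sum(child_states, np.zeros(STATE_DIM))
    return np.tanh(W_label @ label + W_child @ children_sum + bias)

def encode_dag(labels, children):
    """Encode a DAG given as {node: label_vector} and {node: [children]}.

    States are computed bottom-up; memoization handles nodes reachable
    from several parents, so each node is visited once.
    """
    states = {}
    def visit(v):
        if v not in states:
            states[v] = node_state(labels[v], [visit(c) for c in children[v]])
        return states[v]
    for v in labels:
        visit(v)
    return states

# Tiny example: swapping the order of node "a"'s children leaves its state unchanged.
labels = {v: rng.normal(size=LABEL_DIM) for v in "abc"}
s1 = encode_dag(labels, {"a": ["b", "c"], "b": [], "c": []})
s2 = encode_dag(labels, {"a": ["c", "b"], "b": [], "c": []})
assert np.allclose(s1["a"], s2["a"])
```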