THEORETICAL RESULTS FOR A CLASS OF NEURAL NETWORKS
The derivation of minimal network architectures for neural networks has attracted considerable attention for several years. To date, numerous algorithms have been proposed to construct networks automatically. Unfortunately, these algorithms lack a fundamental theoretical analysis of their capabilities, and only empirical evaluations on a few selected benchmark problems exist. Some theoretical results have been obtained for small classes of well-known benchmark problems, such as parity and encoder functions, but their restrictiveness limits their value. In this work we describe a general class of 2-layer networks with 2 hidden units that is capable of representing a large set of problems; the cardinality of this class grows exponentially with the number of inputs N. Furthermore, we outline a simple algorithm for determining whether any given function (problem) is a member of this class. The class considered in this paper includes the benchmark problems parity and symmetry. Finally, we extend this class to an even larger set of functions and point out several interesting properties it exhibits.
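As a concrete illustration of what a 2-layer network with only 2 hidden threshold units can represent, the following Python sketch implements the classical construction for the symmetry (palindrome) function on N bits. This is a hedged example of the general idea, not necessarily the construction used in this paper: the pair-difference weights 3**i are a standard choice that makes the weighted sum zero exactly when the input is symmetric (by uniqueness of balanced-ternary representations), so two threshold units detecting s >= 1 and s <= -1 suffice.

```python
def step(z):
    """Heaviside threshold unit: fires (returns 1) iff z >= 0."""
    return 1 if z >= 0 else 0

def symmetry_net(x):
    """2-layer, 2-hidden-unit threshold network computing the symmetry
    function: returns 1 iff the bit vector x is a palindrome.

    Illustrative construction only; the weights 3**i are an assumption,
    chosen so that the pair-difference sum s is zero iff x is symmetric.
    """
    n = len(x)
    # Weighted sum of pair differences; each difference lies in {-1, 0, 1},
    # so by balanced-ternary uniqueness s == 0 iff every pair matches.
    s = sum((x[i] - x[n - 1 - i]) * 3**i for i in range(n // 2))
    # Two hidden threshold units detect the two ways s can be nonzero.
    h1 = step(s - 1)    # fires iff s >= 1
    h2 = step(-s - 1)   # fires iff s <= -1
    # Output unit fires iff neither hidden unit does, i.e. iff s == 0.
    return step(-h1 - h2)
```

Note that the number of hidden units stays fixed at 2 for every N; only the input-to-hidden weights grow with N, which is the flavor of result the class described above formalizes.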