Evaluating Scalability of Neural Configurations in Combined Classifier and Attention Models
The brain’s neuronal circuits responsible for recognition and attention are not completely understood, and several candidate circuits based on different mechanisms have been proposed. These models can vary in the number of connection parameters, the meaning of each connection weight, their efficiency, and their ability to scale to larger networks. Explicit analysis of these issues is important because, for example, certain models may require an implausible number of connections (more than are available in the brain) in order to process the amount of information the brain can process. Moreover, certain classifiers may perform recognition well yet be difficult to integrate efficiently with attention models. In this chapter, some of these limitations and scalability issues are discussed, and a class of models that may address them is suggested. The focus is on modeling both recognition and a form of attention called biased competition. Models that are static during recognition and models that are dynamic during recognition are both explored.
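As a rough illustration of the scalability concern, the sketch below compares how the number of weights grows for an all-to-all classifier layer versus a fixed fan-in (locally connected) one, measured against a commonly cited order-of-magnitude estimate of roughly 10^14 synapses in the human brain. The layer sizes, fan-in, and synapse budget used here are illustrative assumptions, not figures taken from the models discussed in this chapter.

```python
# Back-of-the-envelope comparison of connection-count scaling in
# fully connected versus locally connected classifier layers.
# All numeric values are assumed, order-of-magnitude figures chosen
# only to illustrate the argument.

BRAIN_SYNAPSES = 1e14  # commonly cited order-of-magnitude estimate

def full_connections(n_inputs: float, n_outputs: float) -> float:
    """All-to-all weights between two layers: grows as N * M."""
    return n_inputs * n_outputs

def local_connections(n_outputs: float, fan_in: float) -> float:
    """Each output unit samples a fixed fan-in of inputs: grows as M * k."""
    return n_outputs * fan_in

for n in (1e4, 1e6, 1e8):  # hypothetical layer sizes
    dense = full_connections(n, n)
    sparse = local_connections(n, fan_in=1e3)
    print(f"N={n:.0e}: dense={dense:.1e} "
          f"({dense / BRAIN_SYNAPSES:.1e} of synapse budget), "
          f"sparse={sparse:.1e}")
```

Under these assumptions, an all-to-all layer of 10^8 units alone would demand 10^16 weights, two orders of magnitude beyond the whole-brain synapse budget, while the fixed fan-in variant stays comfortably within it; this is the sense in which some model classes fail to scale plausibly.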