Self-organized learning in multi-layer networks

We present a framework for the self-organized formation of high-level learning by a statistical preprocessing of features. The paper first focuses on the formation of the features in the context of layers of feature-processing units, a kind of resource-restricted associative multiresolution learning. We claim that such an architecture must reach maturity through basic statistical properties, optimizing the information-processing capabilities of each layer. The final symbolic output is learned by pure association of features of different levels and kinds of sensorial input. Finally, we also show that common error-correction learning for motor skills can be accomplished by non-specific associative learning.

Keywords: feedforward network layers, maximal information gain, restricted Hebbian learning, cellular neural nets, evolutionary associative learning
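The abstract does not spell out the form of "restricted Hebbian learning" used in the paper; a classical norm-restricted variant is Oja's rule, sketched below purely as an illustration of the idea. Plain Hebbian updates grow weights without bound; the subtractive restriction keeps the weight norm near one, so a single linear unit converges toward the principal component of its input statistics, i.e. the direction of maximal variance it can pass on. All names and parameters here (`eta`, the synthetic data) are assumptions for the demo, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D inputs whose variance is dominated by the direction (1, 1).
basis = np.array([1.0, 1.0]) / np.sqrt(2.0)
x = rng.normal(size=(5000, 2)) * 0.1          # small isotropic noise
x += rng.normal(size=(5000, 1)) * basis       # strong component along `basis`

w = rng.normal(size=2)   # initial weights (hypothetical single linear unit)
eta = 0.01               # learning rate (assumed value for the demo)
for xi in x:
    y = w @ xi                    # unit activation
    w += eta * y * (xi - y * w)   # Oja's norm-restricted Hebbian update

# After training, w should align (up to sign) with the dominant
# variance direction, with ||w|| driven toward 1 by the restriction.
alignment = abs(w @ basis) / np.linalg.norm(w)
print(alignment)
```

The restriction term `- eta * y**2 * w` is what distinguishes this from pure Hebbian learning: without it, `w` would diverge on repeated presentations of correlated input.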

Metadata
Author:Rüdiger W. Brause
URN:urn:nbn:de:hebis:30-79048
Document Type:Article
Language:English
Date of Publication (online):2010/09/08
Year of first Publication:1995
Publishing Institution:Univ.-Bibliothek Frankfurt am Main
Release Date:2010/09/08
Tag:cellular neural nets; evolutionary associative learning; feedforward network layers; maximal information gain; restricted Hebbian learning
Source:International Journal on Artificial Intelligence Tools, 4, pp. 433-451
HeBIS PPN:22757334X
Institutes:Informatik
Dewey Decimal Classification:004 Data processing; computer science
Collections:University publications
Licence (German):Veröffentlichungsvertrag für Publikationen (publication agreement)
