Santosh Vempala - Emergent Computation and Learning from Assemblies of Neurons
Despite breathtaking advances in ML, and in our understanding of the brain at the level of neurons, synapses, and neural circuits, we lack a satisfactory explanation for the brain's performance in perception, cognition, language and behavior; as Nobel laureate Richard Axel put it, "we do not have a logic for the transformation of neural activity into thought and action". The Assembly Calculus (AC) is a framework to fill this gap, a computational model whose basic data type is the neural assembly, a large subset of neurons whose simultaneous excitation is tantamount to the subject's thinking of an object, idea, episode, or word. The AC provides a repertoire of operations ("project", "reciprocal-project", "associate", "pattern-complete", etc.) whose implementation relies only on Hebbian plasticity and inhibition, and encompasses a complete computational system. It has been shown, rigorously and in simulation, that the AC can learn to classify samples from well-separated classes. For basic concept classes in high dimension, an assembly can be formed and recalled for each class, and these assemblies are distinguishable as long as the input classes are sufficiently separated. Viewed as a learning algorithm, this mechanism is entirely online, generalizes from very few samples, and requires only mild supervision, all attributes expected of a brain-like mechanism. This talk will describe these and more recent developments for learning and computing with sequences. It will highlight several fascinating questions that arise, from random models of the connectome, to the convergence of assemblies, to their unexpected generalization abilities, to capturing the brain's ease with language.
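To make the abstract's ingredients concrete, here is a minimal NumPy sketch of the AC's "project" operation under common simplifying assumptions (not code from the talk): random sparse connectivity, inhibition modeled as k-winners-take-all, and multiplicative Hebbian plasticity. All parameter values (n, k, p, beta) are illustrative choices, not figures from the source. Repeatedly firing a fixed stimulus into a downstream area causes a stable set of k winners, an assembly, to emerge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: n neurons per area, k winners (assembly size),
# p connection probability, beta Hebbian plasticity rate.
n, k, p, beta = 1000, 50, 0.05, 0.1

# Random synaptic connectivity: stimulus -> area (W_in) and
# recurrent area -> area (W_rec), each edge present with probability p.
W_in = (rng.random((n, n)) < p).astype(float)
W_rec = (rng.random((n, n)) < p).astype(float)

# A fixed stimulus: the same k input neurons fire on every round.
stimulus = np.zeros(n)
stimulus[:k] = 1.0

def top_k(drive, k):
    """Inhibition as k-winners-take-all: only the k most excited neurons fire."""
    winners = np.argsort(drive)[-k:]
    y = np.zeros_like(drive)
    y[winners] = 1.0
    return y

prev = np.zeros(n)  # which area neurons fired on the previous round
overlap = 0
for t in range(20):
    drive = W_in @ stimulus + W_rec @ prev
    cur = top_k(drive, k)
    # Hebbian plasticity: synapses from a firing presynaptic neuron into a
    # firing postsynaptic neuron are scaled up by (1 + beta).
    W_in[cur == 1] *= 1.0 + beta * stimulus
    W_rec[cur == 1] *= 1.0 + beta * prev
    overlap = int(cur @ prev)  # how many winners repeat from the last round
    prev = cur

# As the winners' incoming weights grow, the winner set stabilizes:
# a near-total overlap between consecutive rounds signals a formed assembly.
print(f"overlap with previous round: {overlap} of {k}")
```

The k-winners-take-all step stands in for local inhibition, and the multiplicative weight update is one standard way to model Hebbian strengthening; convergence of the winner set under these dynamics is exactly the "convergence of assemblies" question the talk highlights.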
This is based on joint work with Christos Papadimitriou, Max Dabagia, Mirabel Reid, and Dan Mitropolsky.