SFB/FK-427 Medien und kulturelle Kommunikation

The Helmholtz Machine. Memory, Fluctuation, Dreams

Workshop of the Kulturwissenschaftliches Forschungskolleg in cooperation with the Kunsthochschule für Medien Köln - Subproject C10

Guest: Prof. Dr. Kevin Kirby (Department of Computer Science, Northern Kentucky University)

27 June 2006, 7:00 p.m.; 28 June 2006, 11:00 a.m. - 4:00 p.m.
Kunsthochschule für Medien, Peter-Welter-Platz 2, 50969 Köln

Overview

In the act of computing, form meets matter. When philosophically-minded computer scientists address this issue, they put some interesting machines on the examination table. These machines are meant to be both minimal and ideal.

In the 1930s, Turing, Post and Markov based their machines on humans obeying rules for making and erasing marks on paper. They held these machines up as foundational, indeed universal. Their operations seem mechanical and concrete, yet they are central to the notion of computation as abstract, formal, immaterial, disembodied.

Since the 1980s, there has been a change, from looking at behavior to looking at stuff. Physics and biology now come into play. One finds frustrated spin glasses, entangled quantum systems, layered neural networks, and evolvable molecular systems. Fluctuation and probability, absent from the earlier models where determinism ruled, are now central. Energy, entropy, temperature and other quantities appear.

The Helmholtz machine is part of this movement. Introduced in papers in the journals Neural Computation and Neural Networks in the mid-1990s, it is a neural network related to the Boltzmann machine, which was introduced a decade earlier. The primary thesis of this seminar is this: Helmholtz machines are underappreciated and underexplained.

A Helmholtz machine tries to explain the world it sees through a medium of fluctuating sense data. It has a layer of neurons that holds flickering probabilistic images generated by the world. These images need not be visual in any sense, but it is helpful to use the language of vision here. Think of this layer as a retina. The machine gradually evolves a model of the world behind the images. It does so by maintaining two models: a recognition model (what are the causes of these images?) and a generation model (what images are produced by these causes in the world?). Over time, it tries to make its generation model the inverse of its recognition model. There are two convergent processes here: it must learn to be consistent with the world, and learn to be consistent with itself. This is sometimes called the analysis-by-synthesis paradigm.
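
To make the two-model picture concrete, here is a minimal sketch (in Python with NumPy, not part of the original workshop material) of such a machine with binary stochastic units: one set of weights runs from images to causes (the recognition model), another from causes back to images (the generation model), together with a generative prior over the causes. The class and method names are illustrative assumptions, not an established implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sample_binary(p):
        # Draw a vector of 0/1 "spikes" from independent Bernoulli units.
        return (rng.random(p.shape) < p).astype(float)

    class HelmholtzMachine:
        """A minimal two-layer Helmholtz machine with binary stochastic units."""

        def __init__(self, n_visible, n_hidden):
            # Recognition model: images -> causes.
            self.R = np.zeros((n_hidden, n_visible))
            self.r_bias = np.zeros(n_hidden)
            # Generation model: causes -> images, plus a prior over the causes.
            self.G = np.zeros((n_visible, n_hidden))
            self.g_bias = np.zeros(n_visible)
            self.prior_bias = np.zeros(n_hidden)

        def recognize(self, v):
            # Infer probable causes of a visible pattern.
            p_h = sigmoid(self.R @ v + self.r_bias)
            return sample_binary(p_h), p_h

        def generate(self, h):
            # Produce a (possibly dreamed) visible pattern from causes.
            p_v = sigmoid(self.G @ h + self.g_bias)
            return sample_binary(p_v), p_v

        def dream_causes(self):
            # Sample causes from the generative prior: the start of a "dream".
            p_h = sigmoid(self.prior_bias)
            return sample_binary(p_h), p_h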

To learn, a Helmholtz machine must dream. During the "sleep" phase of its learning algorithm, the machine generates causes from a dreamed version of the world, determined by an evolving probability distribution. These dreams are pushed through its generation model to produce unreal images. These dreamt cause-image associations are used to update its recognition model. When it wakes, the machine then adjusts its generation model. And the sleep/wake cycle proceeds again. As learning proceeds, its dreams acquire more reality.
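
Continuing the sketch above, one sleep/wake cycle could look roughly like this. The local delta-rule updates follow the wake-sleep scheme of Hinton et al. (1995), but the function names, the learning rate and the toy two-pattern "world" are assumptions made purely for illustration.

    def sleep_wake_step(hm, v_real, lr=0.05):
        # --- Sleep phase: dream causes from the generative prior, push them
        # through the generation model to get an unreal image, then nudge the
        # recognition model to recover the dreamed causes from that image.
        h_dream, _ = hm.dream_causes()
        v_dream, _ = hm.generate(h_dream)
        _, p_h = hm.recognize(v_dream)
        hm.R += lr * np.outer(h_dream - p_h, v_dream)
        hm.r_bias += lr * (h_dream - p_h)

        # --- Wake phase: see a real image, infer causes with the recognition
        # model, then nudge the generation model (prior included) so that
        # those causes reproduce the image.
        h, _ = hm.recognize(v_real)
        _, p_v = hm.generate(h)
        hm.G += lr * np.outer(v_real - p_v, h)
        hm.g_bias += lr * (v_real - p_v)
        hm.prior_bias += lr * (h - sigmoid(hm.prior_bias))

    # Toy usage: a "world" of two 4-pixel images; over time the machine's
    # dreams should come to resemble them.
    hm = HelmholtzMachine(n_visible=4, n_hidden=2)
    world = [np.array([1., 1., 0., 0.]), np.array([0., 0., 1., 1.])]
    for step in range(5000):
        sleep_wake_step(hm, world[step % 2])
    print(hm.generate(hm.dream_causes()[0])[1])   # probabilities of a dreamed image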

The Helmholtz machine takes its name from the Helmholtz free energy, a quantity familiar in physical chemistry; this quantity guides the learning process in the machine. The work of Hermann von Helmholtz on the psychophysics of vision was not a reason for the name, yet the accidental connections are mildly fruitful to explore. Accidental connections to his work on physiological acoustics and the psychophysiology of music may also be worth seeking.
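
For reference, the quantity in question can be written down compactly. Writing q for the recognition model's distribution over causes h given an image v, and p for the generation model (notation close to that of the papers listed under Literature), the learning objective is the variational free energy

    F(v) = \sum_{h} q(h \mid v)\,\bigl[\log q(h \mid v) - \log p(h, v)\bigr]
         = -\log p(v) + D_{\mathrm{KL}}\bigl(q(h \mid v)\,\|\,p(h \mid v)\bigr).

Minimizing F raises the probability the generation model assigns to the observed images (consistency with the world) and at the same time pulls the recognition distribution toward the generation model's own posterior over causes (consistency with itself); these are the two convergent processes mentioned above. The wake-sleep algorithm minimizes this quantity only approximately.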

During my Tuesday evening presentation, I will lay out the case for why Helmholtz machines are interesting. During the next day's seminar I will dissect the machine and examine its component parts, and run a simple computer simulation. It is not intended to be a highly mathematical seminar, but this can be adjusted on the fly as the participants wish. My goals are two: to make the vague and somewhat overly enriched metaphors used in the above paragraphs more precise, and to show how the Helmholtz machine is a site of convergence of great ideas in computing, mathematics, philosophy, and - who knows? - aesthetics.

Literature

Dayan, P., G.E. Hinton, R.M. Neal, and R.S. Zemel. 1995. The Helmholtz machine. Neural Computation 7:5, 889-904.

Dayan, P. and G.E. Hinton. 1996. Varieties of Helmholtz Machine. Neural Networks 9:8, 1385-1403.

Dayan, P. and L.F. Abbott. 2001. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press. Helmholtz machines form the climax of this textbook.

Dayan, P. 2003. Helmholtz machines and sleep-wake learning. In M.A. Arbib, Ed., Handbook of Brain Theory and Neural Networks, Second Edition. MIT Press.*

Hinton, G.E., P. Dayan, B.J. Frey and R.M. Neal. 1995. The "wake-sleep" algorithm for unsupervised neural networks. Science 268: 1158-1161.

*) Although quite technical, this is the best short read on the subject to date.

Event type: Workshop



