Plenary Lectures
Scene variability and perception constancy in the visual system: a model of pre-processing before data analysis and learning
Cognitive components of digital media
Kernel machines and their applications
Scene variability and perception constancy in the visual system: a model of pre-processing before data analysis and learning
Professor Jeanny Herault
University Joseph Fourier, Grenoble, France
Jeanny HERAULT is professor emeritus at the University Joseph Fourier, Grenoble. He taught signal processing in engineering schools and currently gives a series of lectures on visual perception in the Master's programme in Cognitive Science. Since 1968, his research has concerned the modelling of natural and artificial neural networks at various levels, from the cellular membrane to large adaptive networks. His interests range from theoretical studies, computer simulations, and the design and implementation of analog and digital electronic neural machines to applications in image and signal processing. Since the early 1990s he has been interested in models of visual perception (retinal processing, motion estimation, color and cortical processing of natural scenes) as well as in high-dimensional data processing with self-organizing neural networks. He is a member of several scientific committees, an expert for the European Community's Future and Emerging Technologies projects, and a scientific advisor to industrial companies. He reviews for international journals in neural networks, signal processing, and image processing.
Hell in data analysis is paved (at least) with variability and noise. Is there some lost Garden of Eden? Is there some way to approach it? In this talk, I take the example of human visual perception and show how our visual system processes visual information so efficiently that it can categorize images or scenes within 100-150 ms, whatever the viewing conditions. In fact, before any high-level recognition task, the visual system applies a series of preprocessing stages to reduce image variability:
- In the retina: a first adaptation process to the global and local intensity and color of the illuminant, and a second to local contrasts, allow information to be extracted equally across the whole image. A spatio-temporal filter spectrally whitens the image so that all spatial frequencies are equally represented.
- In the primary visual cortex: estimating the local power spectrum provides relative insensitivity to image translations. Sampling the frequency spectrum on a log-polar basis by means of log-normal filters makes zooms and rotations easy to process and also allows local perspective to be estimated.
- In cortical area V4, a further Fourier transform of the log-polar spectrum provides insensitivity to zooms and rotations, as well as to perspective transformations.
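The translation-insensitivity of the power spectrum mentioned above can be illustrated with a toy one-dimensional sketch (an illustration of the mathematical principle, not part of the model itself): circularly shifting a signal changes only the phase of its Fourier coefficients, so the magnitude spectrum is unchanged.

```python
import cmath

def dft_mag(x):
    """Magnitude of the discrete Fourier transform of a real signal."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N)))
            for k in range(N)]

signal = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0]
shifted = signal[3:] + signal[:3]   # circular translation by 3 samples

m1, m2 = dft_mag(signal), dft_mag(shifted)
# the magnitude spectra of the original and translated signals coincide
assert all(abs(a - b) < 1e-9 for a, b in zip(m1, m2))
```

The same idea extends to two dimensions; re-sampling the 2-D spectrum on a log-polar grid then turns zooms and rotations into shifts, which a second magnitude spectrum removes, as described for V4.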
Cognitive components of digital media
Professor Lars Kai Hansen
Technical University of Denmark, Denmark
Professor Lars Kai Hansen received the PhD degree in physics from the University of Copenhagen in 1986. He worked on industrial machine learning from 1987 to 1990 at Andrex Radiation Products A/S. Since 1990 he has been with the Technical University of Denmark, where he currently heads DTU Informatics' Section for Intelligent Signal Processing. He is author or co-author of more than 200 papers and book chapters on adaptive signal processing and machine learning and their applications in biomedicine and digital media.
Among the exponentially many ways of grouping data, can we characterize the ones that are likely to make sense to a human? This is a classical research question in psychology, going back at least to the Gestalt theorists. Cognitive component analysis is a quantitative research program in which we apply unsupervised learning methods to digital media to understand the conditions under which the learned structure is well aligned with human cognitive activity. In the talk I will introduce machine learning methods for cognitive component analysis and present evidence for cognitive components in abstract data such as text, social interactions, music, and speech. I will demonstrate a number of applications in human-computer interfaces, including specialized search engines for music, spoken documents, and neuroimaging data.
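To make "grouping data" concrete, here is a deliberately minimal plain-Python k-means sketch, one standard example of the unsupervised learning family the abstract refers to (not the speaker's actual method): points in a feature space are grouped by alternating between assigning each point to its nearest centroid and recomputing the centroids.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: alternate nearest-centroid assignment and centroid update."""
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)          # initialize from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        new_centroids = []
        for j, cl in enumerate(clusters):
            if cl:  # mean of the assigned points, per dimension
                new_centroids.append(tuple(sum(d) / len(cl) for d in zip(*cl)))
            else:   # keep an empty cluster's centroid unchanged
                new_centroids.append(centroids[j])
        centroids = new_centroids
    return centroids, clusters

# two well-separated groups in a 2-D feature space (hypothetical toy data)
pts = [(0.0, 0.1), (0.1, 0.0), (0.2, 0.1), (5.0, 5.1), (5.1, 4.9), (4.9, 5.0)]
cents, groups = kmeans(pts, 2)
```

The research question above is then whether groupings found this way (on text, music, or speech features) coincide with the categories humans perceive.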
Kernel machines and their applications
Professor Klaus-Robert Müller
Technical University Berlin, Germany
Klaus-Robert Müller has been Professor of Computer Science at TU Berlin since 2006; at the same time he directs the Bernstein Focus on Neurotechnology Berlin. He studied physics in Karlsruhe from 1984 to 1989 and obtained his PhD in computer science at TU Karlsruhe in 1992. After a postdoc at GMD FIRST in Berlin from 1992 to 1994, he was a European Community STP Research Fellow at the University of Tokyo from 1994 to 1995. From 1995 he built up the Intelligent Data Analysis (IDA) group at GMD FIRST (later Fraunhofer FIRST) and directed it until 2008. From 1999 to 2006 he was Professor of Computer Science at the University of Potsdam. In 1999 he was awarded the Olympus Prize by the German Pattern Recognition Society (DAGM), and in 2006 he received the SEL Alcatel Communication Award. He has co-authored more than 250 peer-reviewed papers and is active in numerous program committees and editorial boards. His research interests are intelligent data analysis, machine learning, statistical signal processing, and statistical learning theory, with application foci in computational finance, computational chemistry, computational neuroscience, and genomic data analysis. Since 2000 one of his main scientific interests has been the interface between brain and machine: non-invasive EEG-based brain-computer interfacing.
This lecture provides a very brief introduction to Support Vector Machines as an example of successful kernel-based machine learning (ML) and touches on fundamental open issues in this field.
I then briefly review selected successful applications of kernel-based ML, e.g. for intrusion detection and computational chemistry.
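The idea behind kernel machines can be illustrated with a minimal kernel perceptron in plain Python, a sketch of kernel-based learning in general rather than the SVM algorithm the talk covers: an RBF kernel lets a simple linear learning rule, expressed entirely through kernel evaluations, separate data that is not linearly separable in the input space, such as the XOR pattern.

```python
import math

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_perceptron(X, y, kernel=rbf, epochs=10):
    """Dual perceptron: decision function f(x) = sum_i alpha_i y_i k(x_i, x)."""
    alpha = [0.0] * len(X)
    for _ in range(epochs):
        for i, (xi, yi) in enumerate(zip(X, y)):
            f = sum(a * yj * kernel(xj, xi)
                    for a, yj, xj in zip(alpha, y, X))
            if yi * f <= 0:        # mistake: strengthen this example's weight
                alpha[i] += 1.0
    return alpha

def predict(alpha, X, y, x, kernel=rbf):
    s = sum(a * yi * kernel(xi, x) for a, yi, xi in zip(alpha, y, X))
    return 1 if s >= 0 else -1

# XOR-like data: not linearly separable, but separable with an RBF kernel
X = [(0, 0), (1, 1), (0, 1), (1, 0)]
y = [1, 1, -1, -1]
alpha = kernel_perceptron(X, y)
assert all(predict(alpha, X, y, xi) == yi for xi, yi in zip(X, y))
```

An SVM replaces the mistake-driven update with a margin-maximizing optimization, but the role of the kernel, implicitly mapping data into a richer feature space, is the same.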
The main part of my talk discusses ML methods for the online analysis of brain signals; in particular, I chart the path towards an EEG-based brain-computer interface.
Brain-computer interfacing (BCI) aims to use brain signals for, e.g., the control of objects, spelling, and gaming. In particular, the talk will show the wealth, the complexity, and the difficulties of the available data, a truly enormous challenge: a multivariate, strongly noise-contaminated data stream must be processed in real time, and neuroelectric activities must be accurately differentiated.
Finally, I report in more detail on the Berlin Brain-Computer Interface (BBCI), which is based on EEG signals, and take the audience all the way from the measured signal through preprocessing, filtering, and classification to the respective application. BCI as a fascinating new channel for man-machine communication is discussed in a clinical setting and for human-machine interaction.
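The signal-to-application pipeline described above can be caricatured in a few lines of plain Python, a toy sketch on synthetic data, not the BBCI system: smooth a noisy trial with a crude filter, extract a band-power feature, and threshold it to classify.

```python
import math
import random

def smooth(x, w=5):
    """Crude low-pass filter: centered moving average of width w
    (a toy stand-in for the band-pass filtering of real EEG pipelines)."""
    h = w // 2
    return [sum(x[max(0, i - h):i + h + 1]) / len(x[max(0, i - h):i + h + 1])
            for i in range(len(x))]

def power(x):
    """Mean signal power, used here as a single classification feature."""
    return sum(v * v for v in x) / len(x)

rnd = random.Random(1)
# two synthetic "trials": a strong oscillation vs. near-flat activity,
# both buried in additive Gaussian noise
trial_a = [math.sin(2 * math.pi * i / 10) + 0.3 * rnd.gauss(0, 1)
           for i in range(100)]
trial_b = [0.1 * math.sin(2 * math.pi * i / 10) + 0.3 * rnd.gauss(0, 1)
           for i in range(100)]

pa, pb = power(smooth(trial_a)), power(smooth(trial_b))
label = "A" if pa > pb else "B"   # threshold on the filtered power feature
```

A real EEG pipeline replaces each stage with something far more careful: multi-channel spatial filters, frequency-band selection, and a trained classifier instead of a fixed threshold, which is exactly what makes the problem the challenge the abstract describes.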