Benelearn 2008

Program


The official language of the conference is English. The conference will include two days of technical sessions.
Plenary talks will be given by internationally renowned invited speakers.
Regular sessions (two parallel tracks; each talk is 20 minutes long, including question time) will allow both young and senior researchers from the machine learning community to present their work.
A social dinner is planned for the evening of the first day.

Please find the complete Benelearn 2008 program and the proceedings here.


Invited plenary talks


Susan A. Murphy (H.E. Robbins Professor of Statistics & Research Professor, Institute for Social Research & Professor in Psychiatry, University of Michigan, USA)

Title: Machine Learning and Reinforcement Learning in Clinical Research

Abstract: This talk will survey some of the possible roles that machine learning researchers can play in informing and improving clinical practice. Clinical decision making, particularly when the patient has a chronic disorder, is adaptive: the clinician must adapt and then readapt treatment type, combinations, and dose to the waxing and waning of the patient's chronic disorder. This adaptation naturally occurs via clinical measurements of symptom severity, side effects, response to treatment, co-occurring disorders, etc. Currently, most policies for guiding clinical decision making are informed primarily by expert opinion, with only informal use of clinical trial data and clinical databases.
Using trial data and databases poses several challenges: (1) there are usually many unknown causes of the patient observations, so high-quality mechanistic models of the "system dynamics" are available only in very special cases; and (2) clinical databases often include many associations that are not causal, so a simplistic application of learning methods can lead to gross biases. In addition to these causal issues, measures of confidence are crucial in gaining acceptance of policies constructed from data. Some advances in these areas will be discussed; all of them, however, are areas in which machine learning scientists could make a great impact.


Bill Triggs (Laboratoire Jean Kuntzmann (LJK) and CNRS, Grenoble, France)

Title: Scene Segmentation with Latent Topic Markov Field Models and Classification and Dimensionality Reduction using Convex Class Models

Abstract: The talk will be in two parts. In the first part I will present work with Jakob Verbeek on semantic-level scene segmentation, combining spatial coherence models such as Markov and Conditional Random Fields with latent-topic-based local image content models such as Probabilistic Latent Semantic Analysis over bag-of-words representations. In the second part I will present some recent work with Hakan Cevikalp, Frederic Jurie and Robi Polikar on using simple convex approximations to high-dimensional classes for multi-class classification and discriminant dimensionality reduction.
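The second part's idea of approximating each class by a simple convex set and classifying by distance can be illustrated with a minimal sketch. The variant below uses the affine hull of each class's training samples (an illustrative assumption; the speakers' actual formulation may differ), and all data and class names are synthetic:

```python
import numpy as np

# Hedged sketch of a nearest-affine-hull classifier, one simple
# convex/affine class model in the spirit of the talk's second part.
# Illustrative only; not the speakers' exact method.

def affine_hull_distance(x, samples):
    """Distance from x to the affine hull of the rows of `samples`."""
    mu = samples.mean(axis=0)
    A = (samples - mu).T                     # columns span the hull directions
    proj = A @ np.linalg.pinv(A) @ (x - mu)  # orthogonal projection onto that span
    return np.linalg.norm((x - mu) - proj)

def classify(x, class_samples):
    """Assign x to the class whose affine hull is nearest."""
    return min(class_samples, key=lambda c: affine_hull_distance(x, class_samples[c]))

# Two synthetic classes in a high-dimensional space (d >> samples per class,
# so each affine hull is a low-dimensional "flat" rather than the whole space).
rng = np.random.default_rng(0)
d = 50
class_samples = {
    "a": rng.normal(0.0, 0.2, size=(10, d)),
    "b": 3.0 + rng.normal(0.0, 0.2, size=(10, d)),
}
x = 3.0 + rng.normal(0.0, 0.2, size=d)
print(classify(x, class_samples))  # a point near class "b" gets label "b"
```

The appeal of such models is that the per-class geometry stays simple (a flat or hull) even in very high dimensions, which also makes them natural building blocks for discriminant dimensionality reduction.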


Johannes Fürnkranz (Knowledge Engineering Group, TU Darmstadt, Germany)

Title: Preference Learning

Abstract: Preference learning is a learning scenario that generalizes several conventional learning settings, such as classification, multi-label classification, and label ranking. In this talk, we will give a brief introduction to this developing research area and will then focus on our work on explicit modeling of pairwise preferences. In this approach, we learn a separate model for each possible pair of labels, which is used to decide which of the two labels is preferred. The predictions of the pairwise models are then combined into an overall ranking of all possible options. The key advantages of this approach lie in the simplicity of the pairwise models and in the possibility of combining them in various ways, which makes it possible to minimize different loss functions with the same set of trained classifiers. An obvious disadvantage is the complexity resulting from the need to train a quadratic number of classifiers. However, it can be shown that in many cases this problem can be solved efficiently. We will also briefly discuss extensions of the basic model for multi-label classification, hierarchical classification, and ordered classification.
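The pairwise scheme the abstract describes (one binary model per label pair, with predictions aggregated into a ranking by voting) can be sketched as follows. For simplicity each pairwise model here is a nearest-centroid rule and the data are synthetic; any binary classifier could be plugged in:

```python
from itertools import combinations
import numpy as np

# Hedged illustration of the pairwise (one-vs-one) preference approach:
# one binary model per label pair, whose decisions are aggregated by
# voting into a full ranking of all labels. Synthetic data; the
# nearest-centroid pairwise model is an assumption for brevity.

def train_pairwise(X, y, labels):
    centroids = {l: X[y == l].mean(axis=0) for l in labels}
    # each pairwise "model" is just the two class centroids
    return {(i, j): (centroids[i], centroids[j]) for i, j in combinations(labels, 2)}

def rank_labels(models, x, labels):
    votes = {l: 0 for l in labels}
    for (i, j), (ci, cj) in models.items():
        winner = i if np.linalg.norm(x - ci) < np.linalg.norm(x - cj) else j
        votes[winner] += 1
    # sort labels by vote count: a full ranking, not just a top-1 prediction
    return sorted(labels, key=lambda l: votes[l], reverse=True)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.1, size=(20, 2)) for c in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 20)
models = train_pairwise(X, y, labels=[0, 1, 2])
print(rank_labels(models, np.array([3.9, 4.1]), labels=[0, 1, 2]))  # → [2, 1, 0]
```

Note the quadratic cost mentioned in the abstract: with k labels, `train_pairwise` builds k(k-1)/2 models, but each is trained only on the examples of its two labels, which is part of why the overall scheme can still be solved efficiently.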


Gunnar Rätsch (Friedrich Miescher Laboratory of the Max Planck Society, Tübingen, Germany)

Title: Boosting, Margins and Beyond

Abstract: This talk will survey recent work on understanding Boosting in the context of maximizing the margin of separation. Starting with a brief introduction to Boosting in general and AdaBoost in particular, I will illustrate the connection to von Neumann's minimax theorem and discuss AdaBoost's strategy for achieving a large margin. This will be followed by a presentation of algorithms that provably maximize the margin, are considerably faster at maximizing it in practice, and implement the soft-margin idea to improve robustness against noise. In the second part, I will discuss how these techniques relate to other convex optimization techniques and how they are connected to Support Vector Machines. Finally, I will talk about the effects of the different key ingredients of Boosting and the lessons learned from applying such algorithms to real-world problems.
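The margin view of Boosting that the talk starts from can be made concrete with a small sketch: plain AdaBoost over threshold stumps, reporting the normalized margins y_i f(x_i) / Σ_t α_t that the margin-maximization analysis studies. This illustrates AdaBoost itself, on synthetic data, not the speaker's provably margin-maximizing variants:

```python
import numpy as np

# Hedged sketch of AdaBoost with one-dimensional threshold stumps,
# tracking the normalized margins that the margin-of-separation view
# of Boosting is concerned with. Synthetic data; illustrative only.

def best_stump(X, y, w):
    """Weighted-error-minimizing stump, returned as (err, feature, threshold, sign)."""
    best = None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for s in (1, -1):
                pred = np.where(X[:, f] <= t, s, -s)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, f, t, s)
    return best

def adaboost(X, y, rounds=20):
    n = len(y)
    w = np.full(n, 1.0 / n)            # uniform example weights to start
    ensemble = []                      # list of (alpha, feature, threshold, sign)
    for _ in range(rounds):
        err, f, t, s = best_stump(X, y, w)
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X[:, f] <= t, s, -s)
        w = w * np.exp(-alpha * y * pred)  # upweight misclassified examples
        w /= w.sum()
        ensemble.append((alpha, f, t, s))
    return ensemble

def margins(ensemble, X, y):
    f_x = sum(a * np.where(X[:, f] <= t, s, -s) for a, f, t, s in ensemble)
    total = sum(a for a, *_ in ensemble)
    return y * f_x / total             # normalized margins in [-1, 1]; positive = correct

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
ensemble = adaboost(X, y)
m = margins(ensemble, X, y)
print("training accuracy:", (m > 0).mean(), "min margin:", m.min())
```

The quantity `m.min()` is the minimum normalized margin; AdaBoost tends to push it upward as rounds proceed, while the algorithms discussed in the talk maximize it provably and more quickly, and soften it to cope with noise.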