EWRL9 (2011)



The 9th European Workshop on Reinforcement Learning (EWRL-9)


will be co-located with ECML PKDD 2011.
When: Sept 9 – 11, 2011
Where: Athens, Greece



Description

The 9th European Workshop on Reinforcement Learning (EWRL-9)
invites reinforcement learning researchers to participate in
the revival of this world-class event. We plan to make this an
exciting event for researchers worldwide, not only for the
presentation of top-quality papers, but also as a forum for
ample discussion of open problems and future research
directions. EWRL-9 will consist of four keynote talks,
contributed paper presentations, and discussion sessions spread
over a three-day period, plus a poster session with refreshments
on day two.

Reinforcement learning is an active field of
research which deals with the problem of sequential decision
making in unknown (and often stochastic and/or partially
observable) environments. Recently there has been a wealth of
impressive empirical results as well as significant
theoretical advances. Both types of advances are of great
importance, and we would like to create a forum to discuss such
interesting results.

The workshop will cover a range of sub-topics including
(but not limited to):

  • Exploration/Exploitation
  • Function approximation in RL
  • Theoretical aspects of RL
  • Policy search methods
  • Empirical evaluations in RL
  • Kernel methods for RL
  • Partially observable RL
  • Bayesian RL
  • Multi-agent RL
  • Risk-sensitive RL
  • Financial RL
  • Knowledge Representation in RL

Keynote Speakers

  • Peter Auer – University of Leoben, Austria
  • Kristian Kersting – Fraunhofer IAIS, University of Bonn, Germany
  • Peter Stone – University of Texas at Austin, USA
  • Csaba Szepesvari – University of Alberta, Canada

Paper Submission

We are calling for papers (and posters) from the entire reinforcement
learning spectrum, with the option of either 3-page position
papers (on which open discussion will be held) or longer 12-page
LNAI-format research papers. We welcome a range of
submissions in order to foster broad discussion. Accepted papers will
be published in the prestigious Springer LNAI proceedings.

Double submissions are allowed; however, if an EWRL paper is also accepted to another conference proceedings or journal, copyright restrictions prevent it from being reprinted in the official EWRL Springer LNCS proceedings. Such a paper will still be considered for acceptance and presentation at EWRL regardless of whether it can appear in the official proceedings.

We will offer at least one best paper prize of EUR 500.

A selection of papers from EWRL-9 will be published in the
Springer Lecture Notes in Artificial Intelligence (LNAI/LNCS) series.

Poster Submission

  • Submission deadline: 20 August 2011
  • Submission by email to ewrl_posters@yahoo.com
  • Format: a 1-page extended abstract outlining what your poster will be about.
  • After EWRL, all poster presenters will have the option of submitting a 12-page version of their poster submission for consideration for the EWRL LNCS post-proceedings.


Important Dates

  • Paper submissions due: 17 June 2011 (extended from 10 June 2011)
  • Notification of acceptance: 12 July 2011
  • Camera-ready due: 19 July 2011
  • Poster submission due: 20 August 2011
  • Workshop begins: 9 September 2011
  • Workshop ends: 11 September 2011


Organizing Committee

  • Marcus Hutter (General Workshop Chair)
    Australian National University – Canberra, Australia
  • Matthew Robards (Local Organizing Chair)
    Australian National University – Canberra, Australia
  • Scott Sanner (Program Committee Chair)
    NICTA – Canberra, Australia
  • Peter Sunehag (Treasurer)
    Australian National University – Canberra, Australia
  • Marco Wiering (Miscellaneous)
    University of Groningen – Groningen, Netherlands

Program Committee

Additional Reviewers



Keynote Speakers’ Abstracts

Peter Auer - University of Leoben – Leoben, Austria

UCRL and autonomous exploration

After reviewing the main ingredients of the UCRL algorithm and its
analysis for online reinforcement learning – exploration vs.
exploitation, optimism in the face of uncertainty, consistency with
observations and upper confidence bounds, regret analysis – I show how
these techniques can also be used to derive PAC-MDP bounds which match
the best currently available bounds for the discounted and the
undiscounted setting. As is typical for reinforcement learning, the
analysis for the undiscounted setting is significantly more involved.
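
For orientation, the regret bound established in the published UCRL2 analysis (Jaksch, Ortner, and Auer, 2010), which the abstract alludes to but does not restate, has the form

\[
\mathrm{Regret}(T) \;=\; T\rho^{*} - \sum_{t=1}^{T} r_t \;=\; \tilde{O}\!\left(D S \sqrt{A T}\right),
\]

where $S$ and $A$ are the numbers of states and actions, $D$ is the diameter of the MDP, $\rho^{*}$ is the optimal average reward, and $r_t$ is the reward collected at step $t$.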

In the second part of my talk I consider a model for autonomous
exploration, where an agent learns about its environment and how to
navigate in it. Whereas evaluating autonomous exploration is typically
difficult, in the presented setting rigorous performance bounds can be
derived. For that we present an algorithm that optimistically explores,
by repeatedly choosing the apparently closest unknown state – as
indicated by an optimistic policy – for further exploration.

This talk is based on joint work with Shiau Hong Lim.
The research leading to these results has received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement 231495 (CompLACS).

Kristian Kersting - Fraunhofer IAIS, University of Bonn – Bonn, Germany

Increasing Representational Power and Scaling Inference in Reinforcement Learning

As robots are starting to perform everyday manipulation tasks,
such as cleaning up, setting a table or preparing simple meals,
they must become much more knowledgeable than they are today.
Natural environments are composed of objects, and the possibilities
to manipulate them are highly structured due to the general
laws governing our relational world. All of this needs to be
acknowledged if we want to realize thinking robots that efficiently
learn how to accomplish tasks in our relational world.

Triggered by this grand vision, this talk discusses the very promising
perspective on the application of Statistical Relational AI techniques
to reinforcement learning. Specifically, it reviews existing symbolic
dynamic programming and relational RL approaches that exploit the symbolic
structure in the solution of relational and first-order logical Markov
decision processes. These approaches illustrate that Statistical Relational AI may
give new tools for solving the ‘scaling challenge’. It is sometimes
mentioned that scaling RL to real-world scenarios is a core
challenge for robotics and AI in general. While this is true in a trivial
sense, it might be beside the point. Reasoning and learning on appropriate
(e.g. relational) representations leads to another view on the
‘scaling problem’: often we are facing problems with symmetries not
reflected in the structure used by our standard solvers. As additional
evidence for this, the talk concludes by presenting our ongoing work on
the first lifted linear programming solvers for MDPs. Given an MDP, our
approach first constructs a lifted program where each variable represents a
set of original variables that are indistinguishable given the objective
function and constraints. It then runs any standard LP solver on this
program to solve the original program optimally.
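
As a reminder of the object being lifted (this is the standard primal linear program for a discounted MDP, not a formulation specific to the talk):

\[
\min_{v} \sum_{s \in S} v(s)
\quad \text{subject to} \quad
v(s) \;\ge\; r(s,a) + \gamma \sum_{s' \in S} p(s' \mid s,a)\, v(s') \qquad \forall s \in S,\ a \in A.
\]

Lifting, in the sense described above, replaces each group of variables $v(s)$ that are indistinguishable given the objective and constraints by a single variable, shrinking the program before any off-the-shelf LP solver is run.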

This talk is based on joint work with Babak Ahmadi, Kurt Driessens,
Saket Joshi, Roni Khardon, Tobias Lang, Martin Mladenov, Sriraam Natarajan,
Scott Sanner, Jude Shavlik, Prasad Tadepalli, and Marc Toussaint.

Peter Stone - University of Texas – Austin, USA

PRISM – Practical RL: Representation, Interaction, Synthesis, and Mortality

When scaling up RL to large continuous domains with imperfect
representations and hierarchical structure, we often try applying
algorithms that are proven to converge in small finite domains, and
then just hope for the best. This talk will advocate instead
designing algorithms that adhere to the constraints, and indeed take
advantage of the opportunities, that might come with the problem at
hand. Drawing on several different research threads within the
Learning Agents Research Group at UT Austin, I will discuss four types
of issues that arise from these constraints and opportunities: 1)
Representation – choosing the algorithm for the problem’s
representation and adapting the representation to fit the algorithm;
2) Interaction – with other agents and with human trainers; 3)
Synthesis – of different algorithms for the same problem and of
different concepts in the same algorithm; and 4) Mortality – the
opportunity to improve learning based on past experience and the
constraint that one can’t explore exhaustively.

Csaba Szepesvari - University of Alberta – Edmonton, Canada

Towards robust reinforcement learning algorithms

Most reinforcement learning algorithms assume that the system to be controlled can be accurately approximated given the measurements and the available resources. However, this assumption is overly optimistic for too many problems of practical interest: real-world problems are messy. For example, the number of unobserved variables influencing the dynamics can be very large, and the governing dynamics can be highly complicated. How, then, can one ask for near-optimal performance without requiring an enormous amount of data? In this talk we explore an alternative to this standard criterion, based on the concept of regret, borrowed from the online learning literature. Under this alternative criterion, the performance of a learning algorithm is measured by how much total reward it collects compared to the total reward that could have been collected by the best policy from a fixed policy class, the best policy being determined in hindsight. How can we design algorithms that keep the regret small? Do we need to change existing algorithm designs? In this talk, following the initial steps made by Even-Dar et al. and Yu et al., I will discuss some of our new results that shed light on these questions.
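
In symbols, the criterion described above can be written as follows (a standard formulation, not taken verbatim from the talk): the regret of a learning algorithm $\mathcal{A}$ after $T$ steps, relative to a fixed policy class $\Pi$, is

\[
R_T \;=\; \max_{\pi \in \Pi} \sum_{t=1}^{T} r_t(\pi) \;-\; \sum_{t=1}^{T} r_t(\mathcal{A}),
\]

where $r_t(\pi)$ is the reward the fixed policy $\pi$ would have collected at step $t$ and $r_t(\mathcal{A})$ is the reward actually collected by the algorithm. Keeping $R_T$ small, e.g. sublinear in $T$, means the algorithm asymptotically matches the best policy in hindsight.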

The talk is based on joint work with Gergely Neu, Andras Gyorgy and Andras Antos.



Accepted Papers

The following is a list of the presentations to be given at EWRL9.



Registration

We are pleased to announce that registration for EWRL9 is free!
Please simply send the following details to ewrl_registration <at> yahoo.com:

  • Full Name:
  • Email Address:
  • Home Institution:
  • Country:
  • Are you a student (this is simply for our records)?:
  • Do you intend to present a poster?:

(Note that poster presentation is not obligatory; however, we would like to encourage all attendees to take the opportunity to present a poster at our fun poster evening, which will include free food and drinks.)



Workshop Venue


Athens Royal Olympic Hotel
EWRL9 is co-located with ECML PKDD 2011. It will be held at the Athens Royal Olympic Hotel, a family-run five-star property in the centre of Athens. The hotel lies just in front of the famous Temple of Zeus and the National Gardens, beneath the Acropolis and only a two-minute walk from the new Athens Acropolis Museum.
After a complete renovation finished in 2009, the Royal Olympic was transformed into an elegantly decorated art hotel that is, more importantly, very well looked after in every detail. Particular attention was given to creating a very personal and as environmentally friendly a hotel as possible.



Workshop Schedule

Day 1 – Sept 09:

Day 2 – Sept 10:

Day 3 – Sept 11:



Photos


Keynote speaker Csaba Szepesvari’s world view.


The audience listening in awe.


Keynote speaker Peter Auer’s regret is bounded.


The audience tries to follow his proof.


The organizers are all ears.


Keynote speaker Kristian Kersting indulges in exponential progress.


Keynote speaker Peter Stone’s heavy traffic vision: 12-lanes and 4-way green light.

Such bold vision requires some lighter refreshments at the buffet.


Lively discussion at the poster evening.



Sponsors

We thank the following sponsors for their generous support, which allowed us to make the workshop accessible to everyone.

ANU

Springer
