Lecture 7 (17 & 19 & 24 Nov)
Exploration versus Exploitation
Context
sequential decision making
recurring themes:
States, actions, transitions, policy (how the agent translates the state it is in into the action it should take), value functions (functions that assign a value to states so the agent can use them when choosing its next action);
back-up, optimization (planning and searching)
In sequential decision making an agent tries to solve a sequential control problem by directly interacting with an
unknown environment
learning by trial and error, agent tries actions to learn their consequences
not supervised: no examples of correct or incorrect behavior; instead only rewards for actions tried
active learning: the agent interacts with the environment, so it has partial control over what data it will obtain for learning
on-line learning: it must maximize performance during learning, not afterwards
In game theory we considered selfish agents without a mental state, each simply looking for the most rational action. From now on we look at agents that use mental states to decide how to act. We will first focus on a single agent.
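The trial-and-error, on-line learning described above can be sketched with a multi-armed bandit. The following is a minimal illustration, not from the lecture: an epsilon-greedy agent (a standard but here assumed choice of exploration strategy) tries actions, receives only rewards, and builds value estimates from them.

```python
import random

def epsilon_greedy_bandit(true_means, steps=10_000, epsilon=0.1, seed=0):
    """Trial-and-error learning on a bandit: the agent never sees the
    true means, only the rewards of the actions it tries."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms      # how often each action was tried
    values = [0.0] * n_arms    # running reward estimate per action
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:          # explore: random action
            a = rng.randrange(n_arms)
        else:                               # exploit: current best estimate
            a = max(range(n_arms), key=lambda i: values[i])
        reward = rng.gauss(true_means[a], 1.0)   # unknown environment responds
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # incremental mean
        total_reward += reward
    return values, counts, total_reward

values, counts, total = epsilon_greedy_bandit([0.2, 0.5, 0.9])
print(values, counts)
```

Because performance counts *during* learning (on-line learning), epsilon trades off exploration (learning about all actions) against exploitation (taking the action currently believed best).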
Preliminaries: Recap of Probability Theory
Stochastic (or random) variables: an abstract model of the idea of a randomly determined numerical outcome from a set of outcomes.
X: Ω ('outcomes') → ℝ
P(X = x): the probability that X takes the value x. Plotting these probabilities over all x gives the distribution (density) function.
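As a small illustration (not from the lecture) of P(X = x) and the resulting distribution, we can estimate the probability mass function of a fair die empirically from relative frequencies:

```python
import random
from collections import Counter

def empirical_pmf(sample):
    """Relative frequency of each outcome: an estimate of P(X = x)."""
    n = len(sample)
    counts = Counter(sample)
    return {x: counts[x] / n for x in sorted(counts)}

# X: a die roll, with outcome set Omega = {1, ..., 6}
rng = random.Random(42)
rolls = [rng.randint(1, 6) for _ in range(60_000)]
pmf = empirical_pmf(rolls)
print(pmf)  # each value close to 1/6
```

Plotting `pmf` as a bar chart gives exactly the density-function picture the notes refer to.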
Multi-Agent Systems 34
By averaging each row (the last column gives the sample mean): as more and more samples of X are combined, the distribution of the sample mean approaches a normal form. The larger the sample, the more peaked the distribution is around the actual mean, and the more normal it looks. (Figures on slides 35–37.)
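The "more peaked around the actual mean" claim can be checked with a quick simulation (an illustrative sketch, not from the slides): the spread of the sample mean of n draws shrinks roughly like 1/√n.

```python
import random
import statistics

def sample_mean_spread(n, trials=2000, seed=1):
    """Standard deviation of the sample mean of n uniform(0,1) draws,
    estimated over many repeated trials."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.random() for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

for n in (1, 10, 100):
    print(n, sample_mean_spread(n))
# The spread shrinks as n grows: the sample-mean distribution
# becomes more peaked around the true mean 0.5.
```

A histogram of `means` for large n would also look increasingly bell-shaped, which is the central-limit-theorem picture in the slides.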