Summary: Multi-Agent Systems

Uploaded on 12-12-2020 · 33 pages · Written in 2020/2021

A summary of the course MAS in the UVT pre-master programme.



MAS Summary


Module 1

Video 1B: agent characteristics.

An agent is something that acts. It is a (distinct portion of a) computer program that represents a
social actor: an individual, an organization, a firm, an avatar, a robot, a machine, an app, etc.
Typically we expect a computer agent to operate autonomously: it makes its decisions by itself
rather than being controlled by something else. A computer agent perceives its environment and
tends to persist over a longer period of time. Basically, it observes what is happening in the
environment, reacts to that, and keeps doing so for some time. Often agents also adapt to change,
and a computer agent may create and pursue goals.

We can consider multi-agent systems (MAS), also studied as agent-based models (ABM). A MAS is a
system of multiple agents that are situated in some environment. They can sense and act in that
environment, communicate with each other, and solve problems collectively. The most important
point is that the collective behavior of a MAS is often more than just the sum of the behavior of
the individual agents. It is not just adding up.

A MAS has several characteristics:

- Agent design
  - Physical vs. programmatic
  - Heterogeneous vs. homogeneous
- Environment
  - Static (chess board)
  - Dynamic (football field)
  - MAS tend to be dynamic
- Perception (what can an agent perceive?)
  - Information is spatially, temporally and semantically distributed
  - Partially observable → optimal planning may be intractable, so we need to take into
    account that agents do not know everything
- Control (an agent needs to decide what to do)
  - Agents decide on their own what to do, so control is decentralized: there is no single
    program for all agents. Advantage: more robustness and fault tolerance → if one agent
    fails somewhere, the group may still be able to pursue the goal. Disadvantage: it is
    harder to divide what each agent should do, i.e. how to make decisions. Often, people
    rely on game theory. Control requires coordination.
- Knowledge (agents have knowledge)
  - Agents have knowledge about the world, and the world includes the other agents as well.
    But the level of common knowledge may differ, so MAS agents should consider what other
    agents know.
- Communication (agents communicate with each other)
  - Two-way system: sender and receiver.
  - Necessary for coordination and negotiation.
  - In a MAS we typically need protocols for heterogeneous agents that interact with each
    other.

Agent types (not mutually exclusive):

- Simple reflex agents (react to stimuli).
- Model-based reflex agents.
- Goal-based agents.
- Utility-based agents.
- Learning agents.
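The first type can be sketched in a few lines. Below is a minimal, hypothetical illustration (the two-square vacuum world is my own example, not from the course): a simple reflex agent maps each percept directly to an action via condition-action rules, with no internal state or model of the world.

```python
# Sketch of a simple reflex agent in a hypothetical two-square vacuum world.
# The agent reacts to the current percept only: no memory, no world model.

def simple_reflex_agent(percept):
    """percept is a (location, status) pair, e.g. ('A', 'Dirty')."""
    location, status = percept
    if status == "Dirty":   # rule 1: clean a dirty square
        return "Suck"
    if location == "A":     # rule 2: otherwise move to the other square
        return "Right"
    return "Left"

print(simple_reflex_agent(("A", "Dirty")))  # Suck
print(simple_reflex_agent(("A", "Clean")))  # Right
print(simple_reflex_agent(("B", "Clean")))  # Left
```

Because the mapping from stimulus to action is fixed, such an agent cannot handle situations where the right action depends on history, which is what motivates the model-based and goal-based types.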

Video 1C: rational agents.

An agent consists of sensors to perceive what is in the environment, an agent program (or agent
function), and actuators to carry out the agent's actions.




An agent is situated in a certain environment and perceives what is happening there, as far as it
is observable to the agent. That may be everything in the environment, but it may also be only a
part of it.

The perceived information forms the input to the agent program, which contains rules or functions
to decide how to act. So the input is transformed into some actuating control. A MAS has multiple
agents inside the environment, which can interact with each other.

Consider agents playing football: they are designed to optimize their performance in the
environment in which they operate.

A rational agent is an agent whose program is designed to optimize the appropriate performance
measure. A performance measure is a function that evaluates a sequence of environment states (not
agent states). We want the agent to optimize its behavior in terms of its goals in the environment.
Consider an agent playing football: the goal is to get the ball into the goal of the opponent. To
evaluate progress toward that, we could look at where the ball is, its distance to the goal, etc. →
the agent continually evaluates the sequence of environment states and tries to optimize it; that
makes it a rational agent. But the agent does not know everything and does not know what the other
agents are doing, so it is never sure about the effect of its actions. Rather than optimizing the
performance measure itself, it should therefore optimize the expected performance measure. To
improve this, an agent may also need to gather more information: rather than deciding on a certain
action right away, a rational agent may need to apply a "looking action". While looking, an agent
can store information in its own memory and use it to optimize its expected performance.
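Optimizing the expected performance measure just means weighting each possible outcome by its probability. A small sketch makes this concrete; the actions, probabilities and scores below are made-up numbers for a hypothetical football agent, not values from the lecture:

```python
# Choosing the action that maximizes the *expected* performance measure.
# Probabilities and scores are arbitrary illustrative numbers.

def expected_value(outcomes):
    """outcomes: list of (probability, performance_score) pairs."""
    return sum(p * score for p, score in outcomes)

# Each action leads to uncertain environment states:
actions = {
    "shoot":   [(0.2, 10), (0.8, 0)],   # small chance of a high payoff
    "pass":    [(0.9, 3), (0.1, -1)],   # safe, but lower payoff
    "dribble": [(0.5, 5), (0.5, -2)],
}

best = max(actions, key=lambda a: expected_value(actions[a]))
print(best, round(expected_value(actions[best]), 2))  # pass 2.6
```

Note that "shoot" has the highest possible payoff, but "pass" wins on expectation (0.9·3 − 0.1·1 = 2.6 versus 0.2·10 = 2.0), which is exactly the distinction between optimizing the performance measure and optimizing its expectation.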

Video 1D: simulating language evolution.

We can investigate how humans could have evolved language by using MAS. Language evolution may be
the hardest problem in science → why do humans have language as we do, while other species
(including our nearest cousins, the monkeys) do not? And what cultural, cognitive and biological
mechanisms underlie the evolution of language? The study goal is to investigate under what kinds of
mechanisms (evolutionary, social, cognitive, etc.) human languages could have evolved.

There are various issues that are important in language evolution. Two of the main aspects:

- Individual agents need to learn language and perform language. All these aspects require
  processing. We do that with our brain, but we can model it in a computer.
- Collective aspect: we share language with many other individuals, so we can interact. But
  having interactions with others may also make language change. We have also evolved
  biologically to use language.

These aspects interact with and influence each other. Individual performance is based on group
conventions (collective behavior), but collective behavior is caused by individuals. This link is
very hard to investigate.

Human language evolution is too complex to study using empirical methods alone. ABM can help
address this, because computer simulations can cope well with complex systems. With ABM we can
model a population of individuals:

- Individual: learning behavior, language behavior.
- Population: interactions, population dynamics.

Let’s consider a language agent model:




Looking at the population, we can add multiple agents that interact with each other. The
population can have a spatial structure; agents can be added or die, have social interactions,
etc.

One of the most straightforward models for language evolution is the naming game. It is a game
played between two agents. Each agent is situated in a context where a couple of meanings are
visible to the agent. A speaker agent selects a meaning and encodes it as an utterance. The hearer
agent tries to decode that utterance and identify the meaning. Depending on the outcome of the
game, they can adapt or learn from each other.

So the learning depends on the outcome of the game. If the speaker does not have a word to express
some meaning, it can invent one. If you hear a word that you are not able to decode, you can learn
the utterance. If an interpretation is successful, you can reinforce the mapping between the word
and the meaning: increase the score of the successful association and lower the scores of
competing associations → lateral inhibition. It can also be that the hearer does not understand
the speaker and asks for more information. We can simulate, say, a population of N agents that
start with no meanings and no words, and repeat the following for a number of rounds:

- Select a random speaker and a random hearer, who interact.
- Both agents focus on a particular context.
- The speaker selects a random object as the topic.
- The agents play a language game.

They do this many times, and we are interested in the evolution over rounds: communicative
success grows roughly logarithmically over time. We can use this kind of model to explore
different parameters:

- Social network of agents.
- Dynamics of population.
- Language characteristics.
- Learning mechanisms.
- Social interaction styles, etc.
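The naming-game rounds described above can be sketched as a small simulation. This is an illustrative implementation under simplifying assumptions (a single shared meaning per interaction, hearer adoption on failure, arbitrary parameter values), not the exact model from the lecture:

```python
import random

# Sketch of the minimal naming game with lateral inhibition.
# Each agent keeps word -> score mappings for one shared meaning; a success
# reinforces the used word and inhibits its competitors in both agents.

random.seed(1)

N_AGENTS, ROUNDS = 10, 2000
DELTA = 0.1  # reinforcement / inhibition step (arbitrary choice)

agents = [dict() for _ in range(N_AGENTS)]  # lexicon per agent: {word: score}
successes = []

def best_word(lexicon):
    return max(lexicon, key=lexicon.get)

for round_no in range(ROUNDS):
    speaker, hearer = random.sample(agents, 2)
    if not speaker:                      # invention: speaker has no word yet
        speaker[f"w{round_no}"] = 0.5
    word = best_word(speaker)
    if word in hearer:                   # success: reinforce + lateral inhibition
        for agent in (speaker, hearer):
            agent[word] = min(1.0, agent[word] + DELTA)
            for other in agent:
                if other != word:
                    agent[other] = max(0.0, agent[other] - DELTA)
        successes.append(1)
    else:                                # failure: hearer adopts the new word
        hearer[word] = 0.5
        successes.append(0)

# Success rate early vs. late: it rises as the population converges on one word.
print(sum(successes[:200]) / 200, sum(successes[-200:]) / 200)
```

Comparing the success rate of the first and last 200 rounds shows the collective effect the notes emphasize: no single agent is in charge, yet the population converges on a shared convention.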
