Summary

Multi-Agent Systems - Summary Slides Lecture 1 and 2

Pages: 24
Uploaded on: 30-12-2024
Written in: 2022/2023

A summary of lecture 1 and 2 slides for the course Multi-Agent Systems.


Lecture 1 - Introduction
What is an Agent?
● An agent is a computer system that is situated in some environment, and that is capable
of autonomous action in this environment in order to meet its delegated objectives




● Note: autonomy is a spectrum!
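The definition above can be sketched as a minimal perceive-act loop. This is an illustrative toy (the environment, percept, and action names are assumptions, not from the slides): the agent is given a delegated objective and decides its own actions to meet it.

```python
# Minimal sketch of the agent-environment interaction loop (illustrative names).
class CounterEnv:
    """A toy environment: a single counter the agent can increment."""
    def __init__(self):
        self.value = 0

    def observe(self):
        return self.value

    def apply(self, action):
        if action == "inc":
            self.value += 1

class KeepBelowTen:
    """Agent with a delegated objective: raise the counter, but keep it below 10."""
    def act(self, percept):
        return "inc" if percept < 9 else "stop"

def run(agent, env, steps):
    """The agent perceives its environment and chooses its actions autonomously."""
    for _ in range(steps):
        percept = env.observe()
        action = agent.act(percept)  # the agent decides on its own
        env.apply(action)

env = CounterEnv()
run(KeepBelowTen(), env, steps=20)
print(env.value)  # -> 9
```

Note that the loop never tells the agent what to do: autonomy here means the action is computed by the agent from its percept, not pre-scripted by the caller.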

Multi-Agent Systems, a Definition
● A Multi-Agent System is one that consists of a number of agents that interact (with each
other and the environment)
● In general, agents will have different goals (often conflicting!)
● To successfully interact, they will have to learn, cooperate, coordinate, and negotiate

Agents and Environment




Motivations for studying MAS
● Technological:
○ Growth of distributed, networked computer systems
■ (computers act more as individuals than parts)
○ Robustness: no single point of failure
○ Scalable and flexible:
■ adding new agents when needed
■ asynchronous, parallel processing
○ Development and reusability
■ components developed independently (by specialists)
● Scientific:
○ Models for interactivity in (human) societies,
■ e.g. economics, social sciences
○ Models for emergence of cooperation


■ Coordination: cooperation among non-antagonistic agents
■ Negotiation: coordination among self-interested agents

Application: Robotics
● Robots as Physical Agents (Embodiment)
○ Internet of Things (IoT)
○ Swarms of drones
○ Fleet of autonomous vehicles
○ Physical internet

Multiagent Systems: typical scientific questions addressed
● How can cooperation emerge in societies of self-interested agents?
● What actions should agents take to optimize their rewards/utility?
● How can self-interested agents learn from interaction with the environment and other
agents to further their goals?
● How can autonomous agents coordinate their activities so as to cooperatively achieve
goals?

MAS as Distributed AI (DAI)
● AI : Cognitive processes in individuals
○ Inspiration: neuro-science, behaviourism, ...
● DAI: Social processes in groups
○ Inspiration: social sciences, economics, ....
● Basic question in DAI
○ How and when should which agents interact (compete or collaborate) in order to
achieve their design objectives?
● Approaches:
○ Bottom-up: given specific capabilities of individual agents, what collective
behaviour will emerge?
○ Top-down: Search for specific group-level rules (e.g., conventions, norms, etc.)
that successfully constrain or guide behaviours at individual level;

Multiagent Systems is Interdisciplinary
● The field of Multi-Agent Systems is influenced and inspired by many other fields:
○ Economics
○ Game Theory
○ Philosophy and Logic
○ Mathematics (e.g. optimal control)
○ Ecology
○ Social Sciences
● This can be both a strength and a weakness
● This has analogies with Artificial Intelligence itself




Intelligent Agents
● An intelligent agent is a computer system capable of flexible autonomous action in some
environment
● Autonomous: not pre-determined by designer
● By flexible, we mean:
1. Reactive (able to receive information from environment and respond)
2. Pro-active (able to reason and/or learn and work towards goals)
3. Social (able to communicate, coordinate, negotiate and cooperate)

Simple Typology for Intelligent Agents
● Intelligence in agents covers a spectrum:
● Reflex agents
○ Simple reflex agents
○ Model-based reflex agents
● Goal based agents
● Utility based agents
● Learning agents

Type 1: Simple Reflex Agent
● Reacts to environment
○ Percept → Action
○ Based on simple if-then rules (condition-action)
● Properties:
○ No state: ignore history
○ Pre-computed rules
○ Cannot handle partial observability
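The condition-action idea can be sketched with the classic vacuum-world example (illustrative, not from the slides): each percept maps straight to an action through fixed if-then rules, with no memory of past percepts.

```python
# Sketch of a simple reflex agent in a two-square vacuum world (illustrative).
# Percept -> action via pre-computed condition-action rules; no state is kept,
# so the agent cannot cope with anything its current percept does not reveal.
def simple_reflex_vacuum(percept):
    location, status = percept      # e.g. ("A", "dirty")
    if status == "dirty":
        return "suck"
    if location == "A":
        return "move_right"
    return "move_left"

print(simple_reflex_vacuum(("A", "dirty")))  # -> suck
print(simple_reflex_vacuum(("A", "clean")))  # -> move_right
```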

Type 2: Model-Based Reflex Agent
● Reflex agent with state
● Agent uses memory to store an internal representation of its world
● Internal model based on percept history
● This internal model allows it to handle partially observable environments
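A minimal sketch of the same vacuum world with an internal model (the state representation is an assumption for illustration): by remembering which squares it has seen clean, the agent can act sensibly even though each percept reveals only the current square.

```python
# Sketch of a model-based reflex agent (illustrative two-square vacuum world).
class ModelBasedVacuum:
    def __init__(self):
        self.known_clean = set()     # internal model built from percept history

    def act(self, percept):
        location, status = percept   # only the current square is observable
        if status == "dirty":
            return "suck"
        self.known_clean.add(location)
        if self.known_clean >= {"A", "B"}:
            return "stop"            # the model says every square is clean
        return "move_right" if location == "A" else "move_left"

agent = ModelBasedVacuum()
print(agent.act(("A", "clean")))  # -> move_right
print(agent.act(("B", "clean")))  # -> stop
```

A simple reflex agent with the same percepts could never stop: it cannot know the other square is clean, which is exactly the partial-observability problem the internal state solves.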

Type 3: Goal-Based Agent
● Goal = desired outcome
● Goal-based (planning) agents act by reasoning about which actions will achieve the goal
● Less efficient, but more adaptive and flexible
● Search and planning: AI subfields concerned with finding sequences of actions to reach a
goal.
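Such reasoning can be sketched as breadth-first search over states for a sequence of actions that reaches the goal (the one-dimensional world below is illustrative, not from the slides):

```python
# Sketch of the search step of a goal-based agent: breadth-first search
# for a sequence of actions leading from the start state to the goal state.
from collections import deque

def plan(start, goal, neighbors):
    """Return a list of actions from start to goal, or None if unreachable."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

# Illustrative 1-D world: positions 0..4, the agent can step left or right.
def neighbors(s):
    moves = []
    if s > 0:
        moves.append(("left", s - 1))
    if s < 4:
        moves.append(("right", s + 1))
    return moves

print(plan(0, 3, neighbors))  # -> ['right', 'right', 'right']
```

The agent executes the returned action sequence; planning first and acting later is what makes it less efficient than a reflex agent but far more flexible when the goal changes.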




