[ Intelligent Systems ] Summary | Lecture Notes

17 pages · Uploaded 03-04-2018 · Written in 2017/2018
Summary of 17 pages for the course Intelligent Systems at VU


INTELLIGENT SYSTEMS
LECTURE 1 – INTRODUCTION
MACHINE AS AN INTELLIGENT AGENT

Main methods to optimize for the best expected outcome: text/image processing, data integration,
knowledge representation, automated reasoning, machine learning and genetic programming (writing code that
improves its own behavior).

Philosophical questions: What is consciousness/free will? Can a machine be creative/have rights?
Ethical questions: surveillance/privacy, responsibility, accountability, using/misusing IS.

CORE TOPICS
• Topic 1: Search and heuristics (principle of rationality)
• Topic 2: Knowledge
• Topic 3: Adaptivity (machine learning)


STATE SPACE REPRESENTATION

Representing problems as state-space problems (because the real world is absurdly complex) → state spaces must
be abstracted/simplified for problem solving → an (abstract) solution = a set of real paths that are solutions in the
real world.
1. What actions and states to consider, given a goal:
o What are appropriate states and initial states? (atomic)
o What are the possible actions to move between states? (atomic)
o What is the cost of an action?
2. Goal formulation: what are the successful world states?
3. Search: determine possible sequences of actions that lead to goal states and choose the “best”
sequence.
4. Execute: Give the solution and perform the actions.

Example – Romania:
⁃ Concrete problem: currently in Arad; the flight leaves from Bucharest.
o States: (representations of) the various cities
o Actions: drive between cities
⁃ Concrete goal: be in Bucharest
⁃ Find solution: sequence of cities (Arad → Sibiu → Fagaras → Bucharest)
⁃ Execute plan: buy tickets + travel
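The formulation steps above can be sketched in code. A minimal sketch of the Romania example as a state space: the road map and its distances below are illustrative (not taken from the lecture slides), and the function names are my own.

```python
# State-space formulation of the Romania example: states are city
# names, actions are drives along roads. Distances are illustrative.
roads = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Timisoara": {"Arad": 118},
    "Zerind": {"Arad": 75},
    "Bucharest": {"Fagaras": 211},
}

def successors(state):
    """Successor function: (action, next_state, cost) triples."""
    return [(f"drive to {city}", city, cost)
            for city, cost in roads[state].items()]

def path_cost(path):
    """Cost of a solution = sum of the action costs along the path."""
    return sum(roads[a][b] for a, b in zip(path, path[1:]))
```

With this formulation, the solution from the example is just a list of states, and its cost is the sum of the drives along it.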

SEARCH

Search through explicit tree generation:
1. ROOT = initial state
2. Nodes and leaves generated through the successor function (related to actions)

State = a (representation of a) physical configuration
Node = a data structure belonging to a search tree
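The state/node distinction can be made concrete with a small data structure. A minimal sketch; the field and function names are my own, not from the lecture:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A search-tree node: wraps a state plus search bookkeeping."""
    state: str                       # the (representation of a) world state
    parent: Optional["Node"] = None  # node this one was expanded from
    action: Optional[str] = None     # action that produced this node
    path_cost: float = 0.0           # cost of the path from the root
    depth: int = 0

def child(parent, action, state, step_cost):
    """Apply the successor function: build a child node."""
    return Node(state, parent, action,
                parent.path_cost + step_cost, parent.depth + 1)
```

Several nodes can thus share the same state (reached via different paths), which is exactly why the two notions must be kept apart.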




SEARCH STRATEGIES
Strategy = picking order of node expansion. Performance is measured in 4 ways:
1. Completeness: does it always find a solution if one exists?
2. Optimality: does it always find the least-cost solution?
3. Time Complexity: number of nodes generated/expanded?
4. Space Complexity: number of nodes stored in memory during search?
a. Time & Space complexity are measured in terms of problem difficulty:
i. b – maximum branching factor of the search tree (number of possible choices)
ii. d – depth of the least-cost solution (distance from the root to the best solution)
iii. m – maximum depth of the state space (may be infinite)


UNINFORMED SEARCH STRATEGIES

Uninformed search strategies (a.k.a. blind search) = use only the information available in the problem definition:
• Breadth-first search = expand the SHALLOWEST unexpanded node
o Implementation: fringe is a FirstInFirstOut (FIFO) queue (A → B → C → D → E → F…)
§ Completeness: Yes, if b is finite
§ Time complexity: O(b^d) – exponential in the depth of the solution
§ Space complexity: O(b^d) – every generated node is kept in memory
§ Optimality: Yes, if all actions have the same cost
• Disadvantage: exponential memory requirement
• Depth-first search = expand the DEEPEST unexpanded node
o Implementation: fringe is a LIFO queue (= stack) (A → B → D → H & I → E → J & K → C…)
§ Completeness: No, unless the search space is finite and no loops are possible
§ Time complexity: O(b^m) – if there are many solutions, it may be faster than BF
§ Space complexity: O(bm) – linear; backtracking search uses even less memory
§ Optimality: No
• Depth-limited search = DF-search with depth limit l
§ Completeness: Yes, if l ≥ d
§ Optimality: No
• Iterative deepening search = depth-limited search with an increasing limit; combines the linear memory of DF with the completeness of BF
§ Completeness: Yes (if there are no infinite paths)
§ Space complexity: O(bd) – linear
§ Optimality: Yes, if all actions have the same cost
• Bidirectional search
• Uniform-cost search
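Breadth-first and depth-first differ only in the fringe discipline: FIFO vs. LIFO. A minimal sketch of both over an assumed successor mapping (the graph below is a made-up example, not the one from the slides); it tracks visited states, so it assumes a finite state space:

```python
from collections import deque

def tree_search(start, goal, successors, lifo=False):
    """Generic search: FIFO fringe = breadth-first, LIFO = depth-first."""
    fringe = deque([[start]])   # fringe holds whole paths
    visited = {start}
    while fringe:
        path = fringe.pop() if lifo else fringe.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                fringe.append(path + [nxt])
    return None  # no solution

# Hypothetical example graph.
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
         "D": [], "E": [], "F": []}
```

Calling `tree_search("A", "F", lambda s: graph[s])` runs breadth-first; passing `lifo=True` turns the same loop into depth-first.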




LECTURE 2 – INFORMED & ADVERSARIAL SEARCH
INFORMED SEARCH (HEURISTICS)

General approach of informed search: best-first search (a node is selected for expansion based on an evaluation
function f(n)). Fringe = queue sorted in decreasing order of desirability.

Heuristic function = educated guess that reduces or limits the search for solutions in domains that are difficult
and poorly understood. Describes the expected quality of a node.
• h(n) = estimated cost of the cheapest path from node n to goal node.
• If n is goal, then h(n) = 0.
• Complete knowledge: often not available & too expensive to calculate
• Use partial knowledge/human expertise = educated guess (can be completely wrong)
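Best-first search with a heuristic can be sketched with a priority queue sorted by f(n). Here f = h, i.e. greedy best-first search (with f = g + h the same loop would be A*); the state names and h-values below are a made-up example:

```python
import heapq

def best_first_search(start, goal, successors, f):
    """Best-first search: always expand the fringe node with the
    lowest evaluation f(n)."""
    fringe = [(f(start), start, [start])]
    visited = set()
    while fringe:
        _, state, path = heapq.heappop(fringe)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in successors(state):
            if nxt not in visited:
                heapq.heappush(fringe, (f(nxt), nxt, path + [nxt]))
    return None

# Hypothetical graph and heuristic estimates h(n) to the goal G.
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 3, "A": 1, "B": 2, "G": 0}
```

From S the search expands A first (h = 1 < 2), illustrating how the educated guess steers the expansion order.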

Two heuristics h1(n) and h2(n) with 0 < h1(n) < h2(n) ≤ h*(n): h2(n) is more informed than h1(n).
• Perfect heuristic: h(n) = h*(n)
• Trivial heuristic: h(n) = 0
o Advantage: with h2 fewer nodes have to be searched than with h1.
o Disadvantage: h2 is often more expensive to calculate.


ADVERSARIAL SEARCH

Games are a form of multi-agent environment. Competitive multi-agent environments give rise to adversarial
problems, a.k.a. games. They are easy to represent, and agents are restricted to a small number of actions.


SEARCH – NO ADVERSARY
• Solution is (heuristic) method for finding goal
• Heuristics and CSP techniques can find optimal solution
• Evaluation function: estimate cost from start to goal
• Examples: path planning, scheduling activities

GAMES – ADVERSARY
• Solution is strategy (strategy specifies moves for every possible opponent reply)
• Time limits force an approximate solution
• Evaluation function: evaluate “goodness” of game position
• Examples: chess, checkers, Othello, backgammon, GO

Deterministic games: you know where the pieces are and every move has a fully determined outcome.
Chance games: you know where the pieces are, but there is always an element of chance (e.g. dice).

MINIMAX

• Two players: MAX and MIN
• MAX moves first (uses the search tree to determine its next move), and the players take turns
until the game is over. Winner → reward, loser → penalty.
• Games as search
o Initial state: e.g. board configuration of chess
o Successor function: list of (move, state) pairs specifying legal moves
o Terminal test: is the game finished?
o Utility function: gives numerical value of terminal states
§ win (+1), lose (-1), draw (0) in tic-tac-toe
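The games-as-search elements above translate directly into a recursive minimax sketch. The tiny two-ply game tree and its leaf utilities below are a made-up example:

```python
def minimax(state, maximizing, successors, utility, terminal):
    """Minimax value of a state: MAX picks the child with the highest
    value, MIN the one with the lowest; terminal states are scored by
    the utility function (e.g. win +1, lose -1, draw 0)."""
    if terminal(state):
        return utility(state)
    values = [minimax(s, not maximizing, successors, utility, terminal)
              for s in successors(state)]
    return max(values) if maximizing else min(values)

# Hypothetical game tree: MAX moves at the root, MIN replies.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

value = minimax("root", True,
                successors=lambda s: tree.get(s, []),
                utility=scores.get,
                terminal=lambda s: s in scores)
```

MIN turns a into min(3, 5) = 3 and b into min(2, 9) = 2, so MAX plays toward a and the root value is 3: the strategy specifies a move for every possible opponent reply.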



