Summary: Introduction to Artificial Intelligence 1, Endterm

Pages: 59
Uploaded on: 27-05-2023
Written in: 2022/2023

Summary for the 2nd part of the course Introduction to Artificial Intelligence 1 at Tilburg University. The summary covers the knowledge needed for the Endterm, which constitutes 70% of the final grade. I passed it with a 7.0 without studying, just by bringing these notes with me to the exam.



Lecture 4: Local Search - Deterministic Problem Solving
- How can we explore a search space if we have limited resources?
- How can we quickly find a solution that may not be optimal, but is good enough?

Search methods discussed so far:
➔ find a path through the search space,
➔ keep track of previous states.

For many problems:
➔ we only want to know the final configuration (not the path that leads to it),
➔ we don’t need the best possible solution, just a good one.



Local search algorithms
- look at neighboring states to decide what to do next;
- do not keep track of the states that have been reached;
- don’t care about the fact that there may be a better solution somewhere else.

Local search algorithms operate using a single current node (rather than multiple paths) and
generally move only to neighbors of that node.
1) they use very little memory—usually a constant amount;
2) they can often find reasonable solutions in large or infinite (continuous) state spaces for which
systematic algorithms are unsuitable.

Local search algorithms are useful for solving pure optimization problems, in which the aim is to find
the best state according to an objective function.

The state-space landscape has both “location” (defined by the state) and “elevation” (defined by the
value of the heuristic cost function or objective function).
- If elevation corresponds to cost, then the aim is to find the lowest valley—a global minimum;
- if elevation corresponds to an objective function, then the aim is to find the highest peak—a
global maximum.
★ (You can convert from one to the other just by inserting a minus sign.) Local search
algorithms explore this landscape.
➔ A complete local search algorithm always finds a goal if one exists;
➔ an optimal algorithm always finds a global minimum/maximum.





Hill-Climbing (steepest-ascent version)

The hill-climbing search algorithm is simply a
loop that continually moves in the direction of
increasing value, that is, uphill.
- It is sometimes called greedy local search,
because it grabs a good neighbor state
without thinking ahead about where to go next.

Imagine trying to climb a mountain:
- in the fog,
- with amnesia,
- and it is not really a mountain (the difference: you can move in only one dimension at a time).

Local maxima: a local maximum is a peak that is higher than each of its neighboring states but lower
than the global maximum
Ridges: result in a sequence of local maxima that is very difficult for greedy algorithms to navigate.
Plateaux: a plateau is a flat area of the state-space landscape.
- It can be a flat local maximum, from which no uphill exit exists, or a shoulder, from which
progress is possible.
- A hill-climbing search might get lost on the plateau.

★ A local maximum can also be a local minimum, depending on what you are looking for ;)
★ If we can move to two different states, we go to the higher one: the steepest ascent.

Hill-Climbing

➢ Keeps track of one current state (no backtracking)
➢ Does not look ahead beyond the immediate neighbors of the current state (greedy)
➢ On each iteration moves to the neighboring state with highest value (steepest ascent)
➢ Terminates when a peak is reached (no neighbor has a higher value).
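The loop described above can be sketched in Python. The `neighbors` and `value` functions are hypothetical problem-specific helpers, here illustrated with a toy 1-D objective:

```python
def hill_climb(start, neighbors, value):
    """Steepest-ascent hill climbing: repeatedly move to the best
    neighbor until no neighbor improves on the current state."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current  # peak reached: no neighbor is higher
        current = best

# Toy example: maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
nbrs = lambda x: [x - 1, x + 1]
print(hill_climb(0, nbrs, f))  # climbs 0 → 1 → 2 → 3, prints 3
```

Note that the algorithm stops as soon as no neighbor has a strictly higher value, which is exactly why it can get stuck on the local maxima, ridges, and plateaux discussed next.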

Problems
- Local Maxima (from one local maximum you cannot move to another
local maximum along a single dimension)
- Plateaus (flat local maximum or shoulder)
- Ridges (see figure)
➔ A sequence of local maxima that are not directly connected.
➔ Each local maximum only has worse connecting states.
➔ Common in low-dimensional state spaces.





Improvements
- Allow a limited number of sideways moves (useful if the plateau is really a shoulder; a
shoulder is a plateau that leads to a local maximum): instead of stopping when no neighbor is
strictly better, keep moving across equal-valued states.
➔ Higher success rate, but a higher number of moves
- Stochastic hill climbing: chooses at random among the uphill moves, with probability related
to steepness.
➔ the steepest uphill move has the highest probability of being chosen
- First-choice hill climbing: generates successors at random until one is found that is better than
the current state, without inspecting all successors first.
➔ Good strategy when a state has many successors and testing all of them is costly
- Random-restart hill climbing: run a number of hill-climbing searches from randomly selected
initial states. “If at first you don’t succeed, try, try again.”
➔ If each hill-climbing search has probability of success p, then a solution will be found on
average after 1/p restarts.
➔ It is complete with probability approaching 1, because eventually a randomly generated
initial state will be the goal state itself.
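Random-restart hill climbing can be sketched as a thin wrapper around an inner climb. All helper names here (`climb`, `random_state`, the target threshold) are illustrative, not from the lecture:

```python
import random

def random_restart(climb, random_state, value, target, max_restarts=100):
    """Random-restart hill climbing: climb from fresh random starts;
    if each run succeeds with probability p, ~1/p restarts suffice."""
    best = None
    for _ in range(max_restarts):
        result = climb(random_state())
        if best is None or value(result) > value(best):
            best = result
        if value(best) >= target:  # good enough, stop restarting
            return best
    return best  # best local optimum seen if target never reached

# Toy: maximize f(x) = -(x - 7)^2; any integer start climbs to x = 7.
f = lambda x: -(x - 7) ** 2
def climb(x):
    while f(x + 1) > f(x) or f(x - 1) > f(x):
        x = x + 1 if f(x + 1) > f(x) else x - 1
    return x
print(random_restart(climb, lambda: random.randint(0, 20), f, 0))  # → 7
```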

Hill-Climbing
● If elevation = objective function → find the global maximum or highest peak → hill
climbing.
● If elevation = cost → find the global minimum or lowest valley → gradient descent.

Simulated Annealing
- Problem with hill climbing: efficient, but will get stuck in a local maximum (incomplete).
- Problem with a random walk: complete (it will eventually reach the goal), but extremely inefficient.
- Combining both ideas gives simulated annealing, which is both complete and efficient.

Simulated Annealing yields both efficiency and completeness
- To explain simulated annealing, we switch our point of view from hill climbing to gradient
descent (i.e., minimizing cost) and imagine the task of getting a ping-pong ball into the
deepest crevice in a bumpy surface.
➔ if we just let the ball roll, it will come to rest at a local minimum.
➔ If we shake the surface, we can bounce the ball out of the local minimum.
➔ The trick is to shake just hard enough to bounce the ball out of local minima but not
hard enough to dislodge it from the global minimum.
➔ The simulated-annealing solution is to start by shaking hard (i.e., at a high
temperature) and then gradually reduce the intensity of the shaking (i.e., lower the
temperature).
➢ Move to randomly chosen neighbor state
➔ If utility is higher, always move to that state.
➔ If utility is lower, move to that state with probability p < 1.
➢ Probability of a move to a worse state



- Becomes less likely the worse the move makes the situation
- Becomes less likely as temperature decreases
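A minimal simulated-annealing sketch, maximizing utility with an assumed geometric cooling schedule (the temperature values and objective are illustrative, not the lecture's exact parameters). Accepting a worse move with probability exp(delta / T) captures both bullets above: the probability shrinks as delta gets more negative and as T decreases:

```python
import math
import random

def simulated_annealing(start, neighbors, value, t0=10.0, cooling=0.95, t_min=1e-3):
    """Always accept uphill moves; accept downhill moves with
    probability exp(delta / T), which shrinks as T decreases."""
    current, t = start, t0
    while t > t_min:
        nxt = random.choice(neighbors(current))
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling  # gradually reduce the "shaking"
    return current

# Toy objective with a local maximum at x = 0 and a global one at x = 6;
# early high-temperature moves let the search escape the local peak.
f = lambda x: max(-abs(x), 2 - abs(x - 6))
nbrs = lambda x: [x - 1, x + 1]
random.seed(0)
result = simulated_annealing(0, nbrs, f)
```

Because acceptance is random, individual runs differ; annealing trades the guaranteed greediness of hill climbing for a chance to bounce out of local maxima.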

Local Beam Search
The local beam search algorithm keeps track of k states rather than just one.
- Selects the k best successors at every step
➔ if k = 1 → hill climbing
➔ if k ≥ 2 → k parallel hill-climbing processes

Stochastic local beam search
- Selects k successors at random every step
- Probability of selection is a function of utility (aka fitness)
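A sketch of the deterministic variant, keeping the k best states each step (the stochastic variant would instead sample k successors with probability proportional to their value/fitness). Helper names are illustrative:

```python
def local_beam_search(starts, neighbors, value, steps=50):
    """Keep k states; each step, pool all successors of all k states
    and retain the k best (k = len(starts); k = 1 is hill climbing)."""
    k, states = len(starts), list(starts)
    for _ in range(steps):
        pool = {s for st in states for s in neighbors(st)} | set(states)
        states = sorted(pool, key=value, reverse=True)[:k]
    return states[0]  # best state found so far

# Toy: two beams starting at 0 and 10 converge on the peak at x = 4.
f = lambda x: -(x - 4) ** 2
nbrs = lambda x: [x - 1, x + 1]
print(local_beam_search([0, 10], nbrs, f))  # → 4
```

Unlike running k independent hill climbs, the shared pool lets a promising beam "absorb" the others: if one state's successors dominate, all k slots migrate toward it.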


Genetic Algorithms
A genetic algorithm (or GA) is a variant of stochastic beam search in which successor states are
generated by combining two parent states rather than by modifying a single state

- Starts with k randomly selected states (population)
- Each state (or individual) is encoded as a string (commonly, a string of 0s and 1s)


Colors (RGB channels encoded as hex strings):
➔ R: FF, G: 00, B: 00 → pure red
➔ R = G = B (all three channels equal) → gray



- Each state is rated by the objective function (aka fitness function)
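The GA loop described above can be sketched roughly as follows, with bit-string individuals, fitness-proportional parent selection, single-point crossover, and a small mutation rate (all parameter values are illustrative assumptions):

```python
import random

def genetic_algorithm(fitness, length=8, k=20, generations=100, p_mut=0.05):
    """Evolve a population of k bit-strings: select parents by fitness,
    combine them with single-point crossover, flip bits with prob p_mut."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(k)]
    for _ in range(generations):
        weights = [fitness(ind) + 1e-9 for ind in pop]  # avoid all-zero weights
        new_pop = []
        for _ in range(k):
            mom, dad = random.choices(pop, weights=weights, k=2)
            cut = random.randrange(1, length)           # single-point crossover
            child = mom[:cut] + dad[cut:]
            child = [b ^ (random.random() < p_mut) for b in child]  # mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Toy "one-max" problem: fitness of an individual = number of 1 bits.
random.seed(1)
best = genetic_algorithm(sum)
```

Combining two parents is what distinguishes a GA from the stochastic beam search above, where each successor is generated from a single state.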



