Summary: all lectures summarized

Pages: 27
Uploaded on: 10-10-2023
Written in: 2020/2021

Using this summary, I passed the exam with a 9. Good luck!

Preview of the content

Lecture 1: Introduction to Computational Intelligence
Intelligence: the ability to acquire and apply knowledge and to solve problems.

Computation: the action of mathematical calculation; the use of computers.

Knowledge: experience (= data) represented by facts and information.

Computational intelligence: making computers acquire knowledge and solve problems.

Artificial intelligence:

- Symbolic AI: logic, knowledge representations,…
- Sub-symbolic AI: neural nets, evolutionary algorithms, nature-inspired algorithms
- Statistical Learning
- Probabilistic methods
- Optimization

Computational intelligence (can be seen as a subfield of AI):

- Neural Networks
- Evolutionary algorithms
- Nature-inspired algorithms (swarm intelligence)
- Probabilistic methods
- Optimization

The most important point is what the two have in common: optimization.

Why is AI so successful? Accessible hardware (not available in the past), powerful hardware,
intuitive programming languages (Python), and specialized packages (PyTorch, NumPy, SciPy, …).

The components of AI/CI systems: knowledge representation (how to represent and process data?),
knowledge acquisition (how to extract knowledge?), and learning problems (what kinds of problems
can we formulate?).

Optimization: find the optimal solution (minimum or maximum) from a given set of possible
solutions, i.e. the one that minimizes/maximizes a given objective function.

Learning as an optimization task: for given data D, find the best data representation from a given
class of representations, i.e. the one that minimizes a given learning objective (loss). The
optimization algorithm is the learning algorithm!

Learning tasks:

- Supervised learning
  - We distinguish inputs and outputs
  - We are interested in prediction
  - We minimize the prediction error
- Unsupervised learning
  - No distinction among variables
  - We look for structure in the data
  - We minimize reconstruction error, compression rate, …
- Reinforcement learning
  - An agent interacts with an environment
  - We want to learn a policy
  - Each action is rewarded
  - We maximize the total reward

Examples of supervised learning: Classification, Regression

Examples of unsupervised learning: Clustering, reconstructing a function

Examples of reinforcement learning: How to control drones, Pac-Man
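The supervised case above can be made concrete as "learning = optimization". The following is a minimal sketch (all names and constants are illustrative, not from the lectures): fitting a line y = a·x + b by minimizing the squared prediction error with gradient descent.

```python
# Minimal sketch: supervised learning as an optimization task.
# We fit y = a*x + b to data by minimizing the mean squared
# prediction error with plain gradient descent.

def fit_line(xs, ys, lr=0.01, steps=2000):
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of the loss L = (1/n) * sum((a*x + b - y)^2)
        grad_a = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # generated by y = 2x + 1
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))  # close to 2.0 and 1.0
```

Here the "learning algorithm" really is just an optimization algorithm applied to the loss, which is exactly the point of the slide.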

Lecture 2: Optimization
An example of an optimization problem is the travelling salesman problem: minimize the total
distance such that each place is visited exactly once. Another is the coverage problem: given the
range of each tool, place the tools so that as many locations as possible are covered (the coverage
is as big as possible). Curve fitting is also an optimization problem: fit a curve through data
points. So is radiation optimization: maximize the radiation through an antenna. Another famous
problem is the knapsack problem: we have a bag (that can hold a certain volume) and a set of
objects, and we must find the best combination that fits. Recap: find the optimal solution (minimum
or maximum) from a given set of possible solutions that minimizes/maximizes a given objective
function:

x* = argmin_{x ∈ X} f(x)

Continuous optimization is easier than discrete optimization. The optimal solution is the one whose
objective value is at most that of any other feasible solution AND for which the constraints hold.
Constraints are very important, because they can shift the optimum to another, locally optimal
point.
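The knapsack problem mentioned above can be sketched as a brute-force search over all subsets of objects. The item values and volumes below are made-up illustrations, not data from the lectures.

```python
# Hedged sketch of the 0/1 knapsack problem: try every subset of
# objects and keep the most valuable one that fits in the bag.
from itertools import combinations

def knapsack(items, capacity):
    # items: list of (value, volume) pairs
    best_value, best_subset = 0, ()
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            volume = sum(vol for _, vol in subset)
            value = sum(val for val, _ in subset)
            if volume <= capacity and value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset

items = [(60, 10), (100, 20), (120, 30)]  # (value, volume)
print(knapsack(items, 50))  # -> (220, ((100, 20), (120, 30)))
```

Note that the search visits all 2^n subsets, which previews the point made later: in discrete (combinatorial) optimization the number of possible solutions grows exponentially.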

Taxonomy of optimization problems:

- Constrained vs. unconstrained: constrained problems are the ones that have explicit
  constraints. If there are no constraints, basically every solution is allowed.
- Convex vs. non-convex: exponential functions are an example of convex functions; if you
  follow the curve downhill, you always end up in the optimal solution. A set is convex if the
  line segment between any two of its points lies entirely within the set (a disc is a convex
  set; a disc with a hole in it is non-convex).
- Deterministic vs. stochastic: deterministic means that once the objective function and the
  constraints are known, nothing can change. Stochastic means randomness is involved (e.g. the
  objective function is a random function).
- Global vs. local: sometimes having a local optimum is already fine. The global optimum is the
  best solution overall; a local optimum is the best in its neighborhood.
- Continuous vs. discrete: in discrete (combinatorial) optimization the number of possible
  solutions is exponential, which requires considering many different combinations.
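The convexity notion in the taxonomy can be checked numerically: a function f is convex iff f(t·a + (1−t)·b) ≤ t·f(a) + (1−t)·f(b) for all a, b and t ∈ [0, 1]. The sketch below (test points and tolerance are illustrative choices) probes this inequality on a grid.

```python
# Numerical illustration of convexity: f is convex iff
# f(t*a + (1-t)*b) <= t*f(a) + (1-t)*f(b) for all a, b, t in [0, 1].
import math

def violates_convexity(f, points, ts):
    for a in points:
        for b in points:
            for t in ts:
                mid = t * a + (1 - t) * b
                # Small tolerance so floating-point noise is not a "violation"
                if f(mid) > t * f(a) + (1 - t) * f(b) + 1e-9:
                    return True
    return False

pts = [x / 10 for x in range(-30, 31)]   # grid on [-3, 3]
ts = [0.25, 0.5, 0.75]
print(violates_convexity(math.exp, pts, ts))  # False: exp is convex
print(violates_convexity(math.sin, pts, ts))  # True: sin is not convex
```

This matches the slide's example: the exponential function never violates the inequality, while a non-convex function like sin does.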

Before tackling any problem, formulate the problem as well as possible (do not think about the
solution/method yet). The method should be adjusted to the problem.

Taxonomy of optimization methods:

- Derivative-free methods (0th-order methods): you do not need to know the mathematical form
  of the objective function; just “look” at candidate solutions and try to improve them.
  E.g. hill climbing, iterated local search, …
- Gradient-based methods (1st-order methods): more specialized, because information about the
  gradient of the objective function is used. The gradient is a vector of first derivatives.
  E.g. gradient descent, Adam, …
- Hessian-based methods (2nd-order methods): require calculating the Hessian, the matrix of
  second derivatives. E.g. Newton’s method, … Hessian-based methods are expensive to compute,
  so they are a good starting point only if the problem is small.
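A derivative-free (0th-order) method can be sketched in a few lines: hill climbing that only evaluates the objective, never its gradient. The objective, step size, and iteration count below are illustrative.

```python
# Minimal sketch of a derivative-free (0th-order) method: hill climbing
# on a 1-D objective. Only function *evaluations* are used; no gradients.
import random

def hill_climb(f, x0, step=0.1, iters=1000):
    x, fx = x0, f(x0)
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)  # "look" at a nearby point
        fc = f(candidate)
        if fc < fx:                                  # keep only improvements
            x, fx = candidate, fc
    return x, fx

random.seed(0)
f = lambda x: (x - 3.0) ** 2 + 1.0   # minimum at x = 3, f(3) = 1
x, fx = hill_climb(f, x0=0.0)
print(x, fx)
```

The same loop structure underlies iterated local search; only the way new candidates are proposed changes.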

Iterative optimization methods: we are interested in numerical methods, so we consider iterative
optimization methods. You start with a random candidate model in the model space and from there
iteratively look for better models. The general procedure:

1. Initialize a candidate solution x_0.
2. Compute a new candidate x_{t+1} = ψ(x_t) and check whether it is better than the current
   one; if not, go back and look for other (better) points.
3. Repeat until a stopping criterion is met.

The crucial part is the formulation of ψ: this is the algorithm's update step.

Gradient descent: the gradient is the vector of derivatives of the objective function with respect
to the optimization variables:

∇f(x) = (∂f/∂x_1, …, ∂f/∂x_D)

The update step is x_{t+1} = x_t − γ ∇f(x_t), where γ is the step size. If you have analytic
access to the objective function, always use gradient descent. Intuition behind the gradient: it
points in the direction of steepest ascent, so stepping against it decreases the objective.
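The general iterative procedure with gradient descent as the update step ψ can be sketched as follows (the quadratic objective, step size, and iteration count are illustrative choices):

```python
# Gradient descent as the update step psi: x_{t+1} = x_t - gamma * grad_f(x_t).

def gradient_descent(grad_f, x0, gamma=0.1, iters=100):
    x = list(x0)
    for _ in range(iters):
        g = grad_f(x)
        x = [xi - gamma * gi for xi, gi in zip(x, g)]  # step against the gradient
    return x

# f(x, y) = (x - 1)^2 + (y + 2)^2, minimum at (1, -2)
grad_f = lambda x: [2 * (x[0] - 1), 2 * (x[1] + 2)]
x = gradient_descent(grad_f, x0=[5.0, 5.0])
print([round(v, 3) for v in x])  # -> [1.0, -2.0]
```

Because each coordinate's distance to the optimum shrinks by a constant factor per step on this quadratic, the iterates converge to the minimum at (1, −2).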


Author: gideonrouwendaal, Universiteit van Amsterdam