
Lecture notes Leren en Geheugen (5102LEGE9Y) Week 7


A comprehensive and well-organized summary of the week 7 lectures for midterm 2 of the course Leren en Geheugen in the Psychobiology programme at the UvA.


Document information

Uploaded on
July 8, 2023
Number of pages
43
Written in
2021/2022
Type
Class notes
Professor(s)
Lansink
Contains
The lectures from week 7

Content preview

Lecture notes Week 7
Quizlet link: https://quizlet.com/Annabel2703/folders/leren-en-geheugen/sets



Contents
Lecture 17: Computational models of learning and memory (part II)
Lecture 18: Alzheimer's disease and dementia
Lecture 19: Memory consolidation & reorganization

Lecture 17: Computational models of learning and memory (part II)
➢ Synapses are important for learning, so we need to be able to simulate them well.
❖ Recap:
➢ Basic ideas for modelling: how can we model individual neurons at different levels?
➢ We covered three models at three levels: Hodgkin-Huxley (the most biophysical, modelling membrane currents), Izhikevich (a little more simplified), and the leaky integrate-and-fire model (the simplest, but useful for basic features; used during the workshop).
❖ How are models of learning implemented in these neural network models?
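The leaky integrate-and-fire model from the recap can be sketched in a few lines of Python. This is an illustrative sketch: the parameter values below are assumptions, not the ones used in the workshop.

```python
import numpy as np

def simulate_lif(I, dt=0.1, tau=10.0, v_rest=-65.0, v_thresh=-50.0,
                 v_reset=-65.0, R=10.0):
    """Leaky integrate-and-fire neuron (illustrative parameters).

    I: input current per time step; returns (voltage trace, spike times in ms).
    """
    v = np.full(len(I), v_rest)
    spikes = []
    for t in range(1, len(I)):
        # Membrane equation: tau * dv/dt = -(v - v_rest) + R * I
        dv = (-(v[t - 1] - v_rest) + R * I[t - 1]) / tau
        v[t] = v[t - 1] + dt * dv
        if v[t] >= v_thresh:      # threshold crossing -> spike, then reset
            spikes.append(t * dt)
            v[t] = v_reset
    return v, spikes

# A constant suprathreshold current produces regular spiking
v, spikes = simulate_lif(np.full(5000, 2.0))
```

The "leak" is the -(v - v_rest) term: without input, the membrane potential decays back to rest, which is what distinguishes this model from a pure integrator.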

Models of learning and memory

❖ Now that we can simulate a neuron, we can start looking at learning rules
❖ We can classify the learning strategies of the brain into three main categories, which can also be used in artificial approaches to learning.
➢ Unsupervised learning: the network receives input from the outside world only.
▪ Input from the outside world -> the activity between the input and the next layer is the main thing that drives the changes.
▪ Nothing oversees how the network learns and performs; it learns freely, unsupervised.





➢ Supervised learning: the neural network receives input from the outside world and also the desired output, so that the network can change its synaptic weights to reach that output.
▪ Compare the actual output with the desired output -> adjust the weights.





➢ Reinforcement learning: the neural network receives input from the outside world, and a reward/punishment teaching signal which biases the learning towards a desired output.
▪ An extension of supervised learning with a teaching signal. The signal is not the exact desired output we want; it is not a very specific signal, but a reward/punishment.

▪ (I wonder: does the network only get told "you did it wrong", or is it also given a direction, such as "your output was too low"? Or would that be more like supervised learning?)
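The supervised scheme described above (compare the actual output with the desired output, then adjust the weights) can be sketched with the delta rule; the input pattern, target value, and learning rate below are hypothetical.

```python
import numpy as np

def delta_rule_step(w, x, target, lr=0.1):
    """One supervised update: move the weights in the direction
    that reduces the difference between actual and desired output."""
    y = w @ x              # actual output of a simple linear network
    error = target - y     # teaching signal: desired minus actual
    return w + lr * error * x

# Hypothetical example: learn to map a fixed input pattern to output 1.0
w = np.zeros(3)
x = np.array([1.0, 0.5, -0.5])
for _ in range(100):
    w = delta_rule_step(w, x, target=1.0)
```

After repeated updates the error shrinks geometrically, so w @ x ends up very close to the desired output of 1.0.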
❖ Biological examples:
➢ Unsupervised learning: for example, receptive fields.
➢ Supervised learning: links with biological mechanisms still unclear. A good candidate is
learning in the cerebellum (teaching signals).
➢ Reinforcement learning: classical conditioning.
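The link between reinforcement learning and classical conditioning can be illustrated with the Rescorla-Wagner update, where the teaching signal is only a reward prediction error, not the desired output itself. The learning rate and reward value here are assumptions for illustration.

```python
def rescorla_wagner(n_trials, alpha=0.1, reward=1.0):
    """Associative strength V of a conditioned stimulus over trials."""
    V = 0.0
    history = []
    for _ in range(n_trials):
        delta = reward - V   # prediction error: what you got minus what you expected
        V += alpha * delta   # learning is proportional to the surprise
        history.append(V)
    return history

# V climbs toward the reward magnitude, and learning slows as surprise shrinks
history = rescorla_wagner(50)
```

Note that the network is never told what V "should" be; it only sees how wrong its reward expectation was on each trial.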




Unsupervised learning

❖ Unsupervised learning is a learning process in which synaptic weights change as a function of the
inputs (and internal activity) only.
❖ Simplicity and plasticity -> convenient for experiments and from a computational point of view.
❖ It is therefore easy to map this process to the learning of biological neural systems and changes in
biological synapses.
❖ The first biological principle of synaptic changes associated with learning is Hebb's principle: "Neurons that fire together, wire together." (Donald Hebb)




➢ [Figure: repeated co-activation of two neurons strengthens the synapse between them step by step -> the synaptic weight increases]

➢ "WHEEL" reminds us of the car: activation of one neuron -> activation of the neuron connected to it by the strengthened synapse.
❖ This principle allows us to recover neural activity patterns, or neural assemblies, from incomplete or noisy data, leading to the concept of associative memory.
➢ This happens without any kind of supervision from external agents.
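This kind of pattern completion can be sketched with a Hopfield-style network whose weights are set by the Hebb rule; the two stored patterns below are hypothetical.

```python
import numpy as np

def hebb_weights(patterns):
    """Hebbian learning: units that are active together (+1/+1 or -1/-1)
    get a stronger mutual connection."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)   # no self-connections
    return W

def recall(W, probe, steps=10):
    """Iteratively recover a stored pattern from an incomplete/noisy probe."""
    s = probe.astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0      # break ties toward +1
    return s

# Two hypothetical stored assemblies (+1/-1 coding)
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = hebb_weights(patterns)
noisy = patterns[0].copy()
noisy[0] = -noisy[0]         # corrupt one unit of the first pattern
restored = recall(W, noisy)  # the assembly is recovered without supervision
```

No external agent tells the network which pattern the probe belongs to; the Hebbian weights alone pull the noisy state back to the stored assembly.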





❖ We can consider a variety of learning rules to train neural networks in an unsupervised way.
➢ Some of these rules come from biology (i.e. refined versions of the Hebb rule, or other different
rules also found in synapses).
➢ Other rules can be considered on the basis of their theoretical and computational properties
(such as stability, simplicity or fast training times).
➢ We will cover several classical learning rules used in unsupervised learning
❖ These theoretical principles serve as guidelines.
❖ The BCM rule:
➢ Formulated by Elie Bienenstock, Leon Cooper and Paul Munro in 1982; it attempts to explain learning in the visual system.
➢ This rule is an extension of the Hebb rule (but for continuous values) which solves two important aspects of the stability problem of the Hebb rule. (Plain Hebbian learning would make the synapses either stronger and stronger, or weaker and weaker, but we want something more stable.)
➢ More precisely, the BCM rule adds:
▪ (i) a leaky term to incorporate depression and make unused synapses weaker, and
▪ (ii) a sliding threshold to balance potentiation with depression and prevent runaway increase of synaptic weights.
➢ Equation (schematically):
▪ dwij/dt = φ(yi) · (yi − 𝜃i) · xj − ε · wij
▪ This describes the temporal evolution of wij, the synaptic weight between neurons i and j.
▪ φ is a sigmoidal function, which imposes a cap on the increase of the synaptic weight.
▪ The sliding threshold (𝜃i) provides the stability factor missing in the standard Hebb rule.
▪ The leaky term (−ε · wij) provides a long-term depression mechanism.
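The BCM update can be sketched numerically. This is a simplified sketch under assumed parameter values: it omits the sigmoidal cap φ from the slide and uses a plain y·(y − θ) plasticity term.

```python
def bcm_step(w, x, y, theta, lr=0.01, decay=0.001, tau_theta=50.0, dt=1.0):
    """One simplified BCM update (illustrative parameters).

    dw/dt     = lr * x * y * (y - theta) - decay * w   # plasticity + leaky term
    dtheta/dt = (y**2 - theta) / tau_theta             # sliding threshold
    """
    w_new = w + dt * (lr * x * y * (y - theta) - decay * w)
    theta_new = theta + dt * (y ** 2 - theta) / tau_theta
    return w_new, theta_new

# Activity above the threshold potentiates the synapse...
w_up, theta_up = bcm_step(w=0.5, x=1.0, y=1.0, theta=0.0)
# ...activity below the threshold depresses it; and because theta itself
# slides with recent activity (y**2), potentiation is self-limiting.
w_down, _ = bcm_step(w=0.5, x=1.0, y=0.5, theta=1.0)
```

A persistently active neuron raises its own threshold, so synaptic weights cannot grow without bound, which is exactly the stability the plain Hebb rule lacks.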

Seller: Annabel2703 (Universiteit van Amsterdam)