CONNECTIONISM
CONNECTIONISM – LECTURE
BASICS
5 ASSUMPTIONS
Neurons integrate information
Neurons: input arrives at the neuron; if it reaches threshold, the neuron produces output
Neural networks: a unit receives input from one set of units & projects output to another
Neurons pass on information about their input levels
Rate of output varies depending on strength of input
Brain structure is layered
Brain is hierarchically organised; each layer is a stage of information processing
At each stage information is transformed to form new representations
Influence of one neuron on another depends on the strength of the connection between them
Weights of connections – how much the input to a unit affects the next unit
Learning is achieved by changing the strength of connections between neurons
Adaptable weights form the central tenet of connectionism
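As one illustration of learning by weight change (the lecture doesn't name a specific rule here, so this assumes a simple Hebbian-style update; the learning rate and variable names are made up):

pre_activity = 0.8    # activity of the sending unit
post_activity = 0.6   # activity of the receiving unit
w = 0.1               # current connection weight
lr = 0.05             # learning rate (assumed value)

# Hebbian-style sketch: strengthen the connection when both units are active
w = w + lr * pre_activity * post_activity
print(w)  # 0.124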
SYMBOLS & ELEMENTARY EQUATIONS
Overall function – how a unit turns its inputs into an output: transfer function → activation function → output function
Transfer function: net_j = Σ_i (input_i · w_ij) – multiply each input by its weight & add them up
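A minimal sketch of the transfer function (the numbers and variable names are made-up examples):

import numpy as np

inputs = np.array([0.5, 1.0, 0.2])       # input_i
weights_j = np.array([0.4, -0.3, 0.9])   # w_ij: weights into unit j

net_j = np.dot(inputs, weights_j)        # sum_i input_i * w_ij
print(net_j)  # 0.5*0.4 + 1.0*(-0.3) + 0.2*0.9 = 0.08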
Activation functions – determine the activity of a neuron
  Linear: you get out what you put in
  Threshold linear: you get out what you put in IF you reach a certain threshold
  Binary: when the threshold is reached, output is 1 (otherwise 0)
  Sigmoid: a smooth, S-shaped curve; output runs between 0 and 1
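The four activation functions as a quick sketch (the threshold value theta is an assumed example):

import numpy as np

def linear(net):
    return net                            # you get out what you put in

def threshold_linear(net, theta=0.5):
    return net if net >= theta else 0.0   # identity above threshold, else 0

def binary(net, theta=0.5):
    return 1.0 if net >= theta else 0.0   # output is 1 once threshold reached

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))     # smooth S-curve between 0 and 1

for f in (linear, threshold_linear, binary, sigmoid):
    print(f.__name__, f(0.8))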
Output function – determines the output a neuron actually sends onwards
  Connectionist models: usually linear, just passes on the activation of a neuron
  Biological models: output function ≠ activation function
    Output = firing rate of a neuron
    Activation = membrane potential of a neuron
THE BIAS
[Diagram: input units in green, bias unit in yellow]
Bias unit:
  Always 1 & always active
  Connected to all units in the next layer & exists in every layer
Bias function: net_j = Σ_i (input_i · w_ij) + b_j – since the bias unit's activity is always 1, its weight b_j is simply added to the weighted sum
Why?
  The subscript j makes the threshold unit-specific – a separate threshold for each individual unit can be controlled
  The threshold becomes trainable & learnable – it changes through learning the same way the weights do
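Putting transfer function, bias, and activation together, a sketch of one unit's forward pass (all values are made-up examples):

import numpy as np

def unit_forward(inputs, weights_j, b_j, theta=0.0):
    # Transfer function with bias: net_j = sum_i input_i * w_ij + b_j.
    # The bias unit's activity is always 1, so b_j is just its weight
    # and is learned exactly like the other weights.
    net_j = np.dot(inputs, weights_j) + b_j
    # Binary activation: b_j effectively shifts the unit's threshold.
    return 1.0 if net_j >= theta else 0.0

inputs = np.array([0.5, 1.0, 0.2])
weights_j = np.array([0.4, -0.3, 0.9])
print(unit_forward(inputs, weights_j, b_j=0.5))   # net = 0.58 -> fires (1.0)
print(unit_forward(inputs, weights_j, b_j=-0.5))  # net = -0.42 -> silent (0.0)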
PROPERTIES
Information is stored in a distributed fashion & processed in parallel
All knowledge in a connectionist model is superimposed on the same set of
connections
Properties of connectionist models (example: Hopfield Network)
Damage resistant & fault tolerant
No individual neuron is of crucial importance – distributed information storage
& processing
Graceful degradation – small damage has no noticeable effect; only as damage increases can you see an effect
Content-addressable memory – a stored pattern can be retrieved from a partial or noisy version of itself
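A minimal Hopfield-network sketch of content-addressable memory (the stored pattern is a made-up example; the outer-product learning rule and sign-threshold update are the standard Hopfield recipe):

import numpy as np

# Store one +/-1 pattern with the Hebbian outer-product rule,
# zeroing the diagonal (no self-connections).
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0)

# Content-addressable recall: start from a damaged copy of the
# pattern (two flipped units) and update until the state settles.
state = pattern.astype(float)
state[0] *= -1
state[3] *= -1

for _ in range(5):               # a few synchronous update sweeps
    state = np.sign(W @ state)   # binary threshold on each unit's net input

print(np.array_equal(state, pattern))  # True: the full pattern is recovered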