Verified Answers | 100% Correct | Latest 2025/2026 Update - Georgia Institute of Technology.
Weight sharing - The weights represent what types of features we extract. The weights (W) are the same for each output node with respect to a specific kernel, regardless of the specific image patch we are looking at.
The total number of input parameters: K1 x K2 + 1 (the K1 x K2 kernel weights plus one bias term)
Input parameters with multiple feature extractions: (K1 x K2 + 1) x M, where M is the number of features
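As a quick sanity check on the two parameter-count formulas above, here is a minimal Python sketch; the helper name conv_params is my own and simply evaluates (K1 x K2 + 1) x M.

```python
def conv_params(k1, k2, m=1):
    """Parameters for M kernels of size K1 x K2, each with its own bias term."""
    per_kernel = k1 * k2 + 1      # K1*K2 weights + 1 bias
    return per_kernel * m         # repeated once per feature map

print(conv_params(3, 3))          # single 3x3 kernel   -> 10
print(conv_params(3, 3, m=16))    # sixteen 3x3 kernels -> 160
```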
Relationship between convolution and cross-correlation - Duality: if cross-correlation is used in the forward pass (the easier operation), then convolution is the operation applied in the backward pass to compute the gradients, and vice versa.
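A small numerical sketch of this duality, assuming SciPy is available: the forward pass is a "valid" cross-correlation, and the gradient with respect to the input turns out to be a "full" convolution of the upstream gradient with the same kernel (checked here against a finite-difference estimate).

```python
import numpy as np
from scipy.signal import correlate2d, convolve2d

rng = np.random.default_rng(0)
image = rng.standard_normal((5, 5))
kernel = rng.standard_normal((3, 3))

# Forward pass: valid cross-correlation; take loss = sum of all outputs
out = correlate2d(image, kernel, mode="valid")
upstream = np.ones_like(out)                        # dLoss/dOut

# Backward pass w.r.t. the input: a *full* convolution with the kernel
grad_input = convolve2d(upstream, kernel, mode="full")

# Finite-difference check on one pixel
eps = 1e-6
bumped = image.copy()
bumped[2, 2] += eps
numeric = (correlate2d(bumped, kernel, mode="valid").sum() - out.sum()) / eps
assert np.isclose(grad_input[2, 2], numeric, atol=1e-4)
```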
Valid convolution - When the kernel is fully on the image (no padding).
Output size of the vanilla convolution, given H, W, K1, K2 - (H - K1 + 1) x (W - K2 + 1)
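A quick check of the valid-output formula against SciPy's "valid" mode cross-correlation (the helper name valid_output_size is mine):

```python
import numpy as np
from scipy.signal import correlate2d

def valid_output_size(h, w, k1, k2):
    """Output size of a valid (no-padding) convolution."""
    return (h - k1 + 1, w - k2 + 1)

H, W, K1, K2 = 32, 32, 5, 3
image = np.random.rand(H, W)
kernel = np.random.rand(K1, K2)

out = correlate2d(image, kernel, mode="valid")
assert out.shape == valid_output_size(H, W, K1, K2)   # (28, 30)
```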
How to add padding - Increase the size of the image by P in both directions (top & bottom, left & right) --> (H + 2P) x (W + 2P). The padding can be filled with zeros or by mirroring the image.
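A small NumPy sketch of both padding options (zero fill vs. mirroring); padding by P on every side gives the (H + 2P) x (W + 2P) size stated above.

```python
import numpy as np

H, W, P = 4, 4, 1
image = np.arange(H * W, dtype=float).reshape(H, W)

zero_padded   = np.pad(image, P, mode="constant", constant_values=0)  # fill with zeros
mirror_padded = np.pad(image, P, mode="reflect")                      # mirror the image edges

assert zero_padded.shape == (H + 2 * P, W + 2 * P)    # (H + 2P) x (W + 2P)
assert mirror_padded.shape == (H + 2 * P, W + 2 * P)
```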
Convolution Features - edges, colors, textures, motifs (corners, shapes)
Receptive field - A region of an image (image patch) from which the node receives input. Usually denoted by a K1 x K2 matrix.
Convolution vs Cross-correlation - Convolution: flip the kernel (rotate it 180 degrees) and take the dot product with the image patch. Cross-correlation: take the dot product with the image patch directly, without flipping the kernel.
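This relationship can be verified numerically, assuming SciPy: convolving with a kernel gives the same result as cross-correlating with that kernel rotated 180 degrees.

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

image = np.random.rand(6, 6)
kernel = np.random.rand(3, 3)

conv = convolve2d(image, kernel, mode="valid")
# Cross-correlation with the 180-degree-rotated kernel matches convolution
xcorr_flipped = correlate2d(image, np.rot90(kernel, 2), mode="valid")

assert np.allclose(conv, xcorr_flipped)
```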