Deep Learning

Lecture 1
Introduction
Foundations of Deep Learning

Probabilistic Learning

Example: tossing a coin.

Bernoulli distribution: a single toss x ∈ {0, 1}, with probability θ of heads, is modelled as

p(x | θ) = θ^x (1 − θ)^(1 − x)

Likelihood function: in this case a product, because the N tosses are independent and identically drawn from the same coin, i.e. governed by one probability θ:

p(x_1, …, x_N | θ) = ∏_n θ^(x_n) (1 − θ)^(1 − x_n), the product running over n = 1, …, N




Objective function: all tosses are identically distributed, so they all follow the same Bernoulli; we maximise the log-likelihood

log p(x_1, …, x_N | θ) = Σ_n [ x_n log θ + (1 − x_n) log(1 − θ) ]

Finding the best θ: setting the derivative with respect to θ to zero gives the maximum-likelihood estimate θ* = (1/N) Σ_n x_n, i.e. the fraction of heads.

Formulating it as a general approach, this is probabilistic (likelihood-based) learning, and it is always set up this way: p(y | x) is a probability distribution whose parameters we fit by maximising the likelihood of the observed data.
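
A minimal NumPy sketch of this maximum-likelihood recipe for the coin example (the simulated data, variable names and grid search are my own illustration, not from the lecture):

```python
import numpy as np

# Simulated coin tosses: 1 = heads, 0 = tails (assumed "true" theta = 0.7).
rng = np.random.default_rng(0)
x = rng.binomial(n=1, p=0.7, size=1000)

def log_likelihood(theta, x):
    """Bernoulli log-likelihood: sum_n x_n log(theta) + (1 - x_n) log(1 - theta)."""
    return np.sum(x * np.log(theta) + (1 - x) * np.log(1 - theta))

# Closed-form maximum-likelihood estimate: the fraction of heads.
theta_mle = x.mean()

# Sanity check: the MLE also maximises the log-likelihood over a grid of candidates.
grid = np.linspace(0.01, 0.99, 99)
theta_grid = grid[np.argmax([log_likelihood(t, x) for t in grid])]
print(theta_mle, theta_grid)  # both close to 0.7
```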




For binary/probabilistic values a Bernoulli distribution can be used; real or continuous values can be handled with a Gaussian or other continuous distributions.
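
A brief sketch of the same likelihood idea with a Gaussian for continuous targets (an assumed toy example, not from the notes): for a fixed σ, the μ that maximises the likelihood is simply the sample mean.

```python
import numpy as np

def gaussian_log_likelihood(y, mu, sigma):
    """Sum of log N(y_n | mu, sigma^2) over the data points."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (y - mu) ** 2 / (2 * sigma**2))

y = np.array([1.2, 0.8, 1.1, 0.9])
mu_mle = y.mean()  # maximum-likelihood mean for fixed sigma
print(mu_mle, gaussian_log_likelihood(y, mu_mle, sigma=1.0))
```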

The data points are assumed identically and independently distributed (i.i.d.). If the data are sequential, each observation depends on the past, so a sequential method is needed instead (autoregressive generative models, RNNs).

Logistic Regression

A model that allows us to classify objects. It captures a linear dependency: for example, the outcome of the coin toss depends on the hand velocity while tossing, or whether a mail is spam depends on the words it contains. Because the model is probabilistic, the linear score is squashed through the sigmoid to give a probability: p(y = 1 | x) = σ(θᵀx) = 1 / (1 + e^(−θᵀx)).
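
As a quick illustration (a minimal sketch with made-up weights and features, not from the lecture), the linear score θᵀx is turned into a probability by the sigmoid:

```python
import numpy as np

def sigmoid(z):
    """Map a real-valued score to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical spam example: 3 features (e.g. word counts) and weights theta.
theta = np.array([0.5, -1.2, 0.3])
x = np.array([1.0, 0.0, 2.0])
print(sigmoid(theta @ x))  # p(y = 1 | x)
```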





The gradient is the vector of (partial) derivatives of the objective with respect to the parameters θ.




Note on the derivation: taking the derivative with respect to θ and applying the chain rule, the difference between our prediction (the sigmoid output) and the label enters the gradient, multiplied by the input x. If the model is already correct this difference is (close to) zero and the weights θ do not change; otherwise we move θ along ±x. The remaining steps of the derivation are not important to know in detail.
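
Putting the pieces together, a small gradient-descent sketch for logistic regression (the toy data and learning rate are my own assumptions, not from the course):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 200 points with 2 features, labels from a simple linear rule.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(float)

theta = np.zeros(2)
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ theta)            # predictions p(y = 1 | x)
    grad = X.T @ (p - y) / len(y)     # (prediction - label) * x, averaged over the data
    theta -= lr * grad                # no update where predictions already match labels
print(theta)
```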



