
Complete WEEK6 note: Machine Learning & Learning Algorithms (BM05BAM)


THIS IS A COMPLETE NOTE FROM ALL BOOKS + LECTURES! Save your time for internships and other courses by studying from this note! Are you a 1st/2nd-year Business Analytics Management student at RSM who wants to survive the block 2 Machine Learning module? Are you overwhelmed by 30 pages of reading every week, full of brand-new terms and formulas? If you are lost about where to start, struggling to keep up because of your other courses, or simply want to learn about Machine Learning, I've got you covered. I successfully passed the course Machine Learning & Learning Algorithms at RSM with a 7.6, WITHOUT A TECHNICAL BACKGROUND before this Master. So if you come from a non-technical bachelor program, this note will point you to the knowledge you should focus on to pass the exam and complete the assignments; and if you already have some machine learning knowledge, this note will certainly make your life easier and give your grade a boost.

Document information

Published on: 12 March 2024
Number of pages: 14
Written in: 2023/2024
Type: Lecture notes
Professor(s): Jason Roos
Contains: All classes

Content preview

Chapter 10: Deep Learning
Neural networks take an input vector of p variables X = (X1, X2, …, Xp) and build a nonlinear function f(X) to predict the response Y.

Neural networks derive new features by computing different linear combinations of X and then squashing each through an activation function; the final model is a linear model in these derived variables.

The feed-forward neural network is the basic architecture for modeling a response with p predictors.

Specializations of deep learning
1. Convolutional Neural Networks: for image classification
2. Recurrent Neural Networks: for time series and other sequences

Advantage
It can fit almost any data! Very low bias.

Typically, a neural network is an attractive choice when the sample size of the training set is extremely large and when interpretability of the model is not a high priority.

Disadvantage
Linear models are much easier to interpret than neural networks.
When faced with several methods that give roughly equivalent performance, pick the simplest.

10.1: Single-Layer Neural Networks

The single-layer model has the form

f(X) = β0 + Σ_{k=1}^{K} βk · g(wk0 + Σ_{j=1}^{p} wkj · Xj)

where g(·) is the nonlinear activation function.
Structure
- Input layer: consists of the p features X1, …, Xp as input units
- Hidden layer: consists of K hidden units/activations:
  o each of the inputs from the input layer feeds into every hidden unit (we pick the number K of hidden units)
  o each hidden unit computes a different transformation of the original features
- Weights: represented as w (one weight wkj per input j and hidden unit k)
- Bias: represented as B (the constant wk0); added to the weighted inputs before they enter the activation function (see the parameter sketch after this list)
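
To make this structure concrete, here is a minimal sketch of the parameters it implies, assuming NumPy; the values of p and K are made up for illustration:

```python
import numpy as np

# p input features and K hidden units; both numbers are made up for illustration
p, K = 4, 10

rng = np.random.default_rng(0)
W = rng.normal(size=(K, p))   # weights w_kj: one row of p weights per hidden unit
b = rng.normal(size=K)        # biases w_k0: one constant per hidden unit
beta = rng.normal(size=K)     # beta coefficients: one per hidden unit
beta0 = rng.normal()          # intercept of the output layer

print(W.shape, b.shape, beta.shape)   # (10, 4) (10,) (10,)
```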

Activation functions
The nonlinearity in the activation function is essential: it allows the model to capture complex nonlinearities and interaction effects in the data.

Sigmoid function: the same function used in logistic regression to convert a linear function into probabilities between 0 and 1:

g(z) = e^z / (1 + e^z) = 1 / (1 + e^(−z))
ReLU (rectified linear unit) function: a function with a threshold at 0 that passes through only the part of its input above the threshold
- More efficient than the sigmoid in computation and storage, which makes it the preferred activation function in modern neural networks

g(z) = 0 if z < 0, and g(z) = z otherwise (i.e. g(z) = max(0, z))

The constant term wk0 in the linear function shifts the threshold.
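
A minimal sketch of both activation functions, assuming NumPy; the sample points and the wk0 = 1.5 shift are made-up values:

```python
import numpy as np

def sigmoid(z):
    # logistic function: squashes any real input into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # rectified linear unit: zero below the threshold, identity above it
    return np.maximum(0.0, z)

z = np.linspace(-3.0, 3.0, 7)
print(sigmoid(z))       # probabilities strictly between 0 and 1
print(relu(z))          # thresholded at z = 0
print(relu(z + 1.5))    # a constant term w_k0 = 1.5 shifts where the threshold kicks in
```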


Process: Prediction
1. The input observations x, plus the bias, are passed from the input layer to the hidden layer of hidden units/activation functions
   a. the observations are multiplied by the weights and passed, together with the bias, to each activation unit
2. The weighted inputs then receive a nonlinear transformation from the activation function
3. The results are multiplied by the beta coefficients and passed to the output layer: a linear combination, with an intercept, of the nonlinearly transformed linear combinations of x (see the sketch after this list)
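
Putting the three steps together, a minimal forward-pass sketch in NumPy; all parameter values are random placeholders, and ReLU is assumed as the activation:

```python
import numpy as np

def forward(x, W, b, beta, beta0):
    """Prediction of a single-layer network for one observation x of length p."""
    z = W @ x + b              # step 1: weighted inputs plus bias, one per hidden unit
    a = np.maximum(0.0, z)     # step 2: nonlinear transformation (ReLU activation)
    return beta0 + beta @ a    # step 3: linear combination of activations plus intercept

# made-up parameters, for illustration only
rng = np.random.default_rng(1)
p, K = 4, 10
x = rng.normal(size=p)
W, b = rng.normal(size=(K, p)), rng.normal(size=K)
beta, beta0 = rng.normal(size=K), rng.normal()
print(forward(x, W, b, beta, beta0))   # a single scalar prediction f(x)
```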

Process: Estimating parameters
In a neural network, the weights, biases, beta coefficients, and intercept all have to be estimated from the data. Loss functions are used for this.

Quantitative objective: minimize the squared-error loss

Σ_{i=1}^{n} (yi − f(xi))²
Qualitative objective: minimize the negative multinomial log-likelihood (cross-entropy)
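
A minimal sketch of both objectives, assuming NumPy: the squared-error loss Σi (yi − f(xi))² for a quantitative response, and the cross-entropy −Σi Σm yim · log fm(xi) for a qualitative one, where yim is the one-hot class indicator and fm(xi) the predicted class probability. All numbers below are made up:

```python
import numpy as np

def squared_error(y, y_hat):
    # quantitative response: sum of squared residuals sum_i (y_i - f(x_i))^2
    return np.sum((y - y_hat) ** 2)

def cross_entropy(Y, P):
    # qualitative response: negative multinomial log-likelihood.
    # Y is one-hot encoded (n x M); P holds predicted class probabilities (n x M)
    return -np.sum(Y * np.log(P))

# toy data: n = 2 observations, M = 3 classes (numbers made up)
Y = np.array([[1, 0, 0], [0, 0, 1]])
P = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
print(squared_error(np.array([1.0, 2.0]), np.array([0.8, 2.5])))
print(cross_entropy(Y, P))
```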