Final Modules Summary Data Mining for Business and Governance (880022-M-6)

This document contains a summary of the final modules/weeks (4-7) of the course Data Mining for Business and Governance. The following topics are included in this summary:
• Crisp (K-means) clustering
• Fuzzy (c-means) clustering
• Hierarchical clustering
• Text mining
• Preprocessing noisy text
• Document similarity: Jaccard coefficient
• Term frequency, inverse document frequency
• Dimensionality reduction
• Feature selection
• Filtering strategy
• Wrapper strategy
• Embedded strategy and Lasso regression as an example
• Feature extraction
• Principal Component Analysis (PCA)
• Feature extraction in deep learning
• Association rule learning
• Support/confidence of an itemset
• Apriori algorithm
• Itemset taxonomy
• Mining big datasets
• Ensemble learning – Boosting, Bagging, Random Forests
• Deep learning and neural networks
• Oversampling/undersampling
• Support vector machines
• Naïve Bayes
• Information gain


Crisp (K-Means) clustering
Produces independent clusters that might fail to capture overlapping clusters. Crisp clustering minimizes the sum of distances between data instances and their respective cluster centroids. These centroids are randomly initialized and updated in each iteration. If they don't update, or don't change, the algorithm can stop as it is not learning any new pattern. The procedure is (a code sketch of this loop follows the stopping criteria below):
1. Tune K; this defines the number of clusters we want to obtain.
2. Select K random data instances to obtain random initial centroids.
3. Assign all data instances to the closest cluster centroid.
4. Recompute the centroids of our newly formed clusters. This can be done by either aggregating all data points in a cluster or selecting the most representative data instance for each cluster.
5. Repeat steps 3 and 4 until a stopping criterion is reached.

Stopping criteria:
• Centroids of a newly formed cluster don't change
• Data instances remain in the same cluster; no new patterns occur
• Maximum number of iterations is reached
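A minimal NumPy sketch of the loop above (the function name and parameters are illustrative, not from the notes):

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Minimal K-means sketch: X is an (n_samples, n_features) array."""
    rng = np.random.default_rng(seed)
    # Steps 1-2: pick k random data instances as the initial centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # Step 3: assign every instance to its closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 4: recompute each centroid as the mean of its cluster
        # (keep the old centroid if a cluster happens to be empty).
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        # Stopping criterion: centroids no longer change.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```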

Fuzzy (c-means) clustering
Produces clusters where each data instance belongs to a group with a membership degree. Data instances can belong to more than one cluster. Each instance is evaluated and receives a membership degree between 0 and 1. This value indicates how much the instance belongs to a certain cluster.

We can tune c as the number of clusters we want to obtain. Next to calculating clusters, fuzzy c-means computes prototypes: weighted aggregations of instances. These prototypes can be used to summarize the data.

The objective of fuzzy clustering is to minimize the membership-weighted sum of distances between each data instance and all cluster prototypes.

The stopping criteria are the same as for k-means: either the prototypes don't change or we reach a maximum number of iterations.
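A minimal NumPy sketch of fuzzy c-means along these lines (the fuzzifier m, tolerance and initialization are illustrative assumptions, not from the notes):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Returns a membership matrix U of shape (n_samples, c) and the c prototypes."""
    rng = np.random.default_rng(seed)
    n = len(X)
    # Random initial membership degrees; each row sums to 1.
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        # Prototypes: membership-weighted aggregations of the instances.
        W = U ** m
        prototypes = (W.T @ X) / W.sum(axis=0)[:, None]
        # Distance of every instance to every prototype (epsilon avoids division by zero).
        d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2) + 1e-12
        # Updated membership degrees in [0, 1]; each row still sums to 1.
        inv = 1.0 / d ** (2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        # Stopping criterion: memberships (and hence prototypes) stop changing.
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return U, prototypes
```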

Hierarchical clustering
Provides a hierarchy of clusters. Doesn't have tunable parameters. Useful when we don't know how many clusters we should obtain to properly represent the problem under investigation.
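A brief sketch with SciPy's agglomerative hierarchy (assuming scipy is installed; the toy data, linkage method and the cut into flat clusters are illustrative choices):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.random((10, 2))                  # toy data: 10 instances, 2 features

Z = linkage(X, method="average")         # build the full hierarchy of clusters
labels = fcluster(Z, t=3, criterion="maxclust")  # only cut if flat clusters are needed
print(labels)
```

The hierarchy itself is built without fixing the number of clusters in advance; cutting it into flat clusters (the last line) is optional.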

Text mining
Representing text mathematically, interpreting it, and inferring knowledge from it. This is very complex.

Preprocessing noisy text
Just lowercasing and removing punctuation is very naïve. More useful steps include (a toy code sketch of these steps follows the list):
• Tokenization: looks for whitespace and special tokens. I'm -> I am.
• Lemmatization: grouping the variants of a word so they can be analyzed as a single item. Watches, watching -> watch.
Tokenization and lemmatization give more interesting vocabularies without noise.
• Named-entity recognition: find patterns that indicate some token is a person's name.
• Language normalization: map a noisy token to the actual word it stands for. Gurl -> Girl.
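A toy pure-Python sketch of these steps; the lookup tables are illustrative stand-ins for a real tokenizer, lemmatizer and normalization lexicon:

```python
import re

# Illustrative lookup tables (assumptions, not from the notes).
CONTRACTIONS = {"i'm": "i am", "don't": "do not"}
LEMMAS = {"watches": "watch", "watching": "watch"}
NORMALIZE = {"gurl": "girl"}

def preprocess(text):
    text = text.lower()
    for short, full in CONTRACTIONS.items():        # tokenization of special tokens: I'm -> I am
        text = text.replace(short, full)
    tokens = re.findall(r"[a-z]+", text)            # whitespace/punctuation tokenization
    tokens = [NORMALIZE.get(t, t) for t in tokens]  # language normalization: gurl -> girl
    tokens = [LEMMAS.get(t, t) for t in tokens]     # lemmatization: watches/watching -> watch
    return tokens

print(preprocess("I'm watching that gurl"))
# ['i', 'am', 'watch', 'that', 'girl']
```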

Document similarity: Jaccard coefficient
Compares the members of two sets: which members are shared and which are distinct:

J(A, B) = |words in both A and B| / |words in A or B| = |A ∩ B| / |A ∪ B|

With 0 indicating no overlap and 1 indicating complete overlap.

For example (1 = the term occurs in the document, 0 = it does not):

      Data  Language  Learning  Mining  Text  Vision  Y
d0     1       0         1        0      0      1     Computer vision
d1     1       1         1        0      1      0     NLP
d2     1       0         1        1      1      0     Text mining

J(d0, d1) = (2 words in both d0 and d1) / (5 words in d0 or d1) = 2/5 = 0.4
So there is not much similarity between computer vision and NLP.

J(d1, d2) = (3 words in both d1 and d2) / (5 words in d1 or d2) = 3/5 = 0.6
There is more similarity between NLP and text mining.
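The same computation in a short Python sketch (the word sets are taken from the table above):

```python
def jaccard(a, b):
    """Jaccard coefficient of two sets of words: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

d0 = {"data", "learning", "vision"}            # computer vision
d1 = {"data", "language", "learning", "text"}  # NLP
d2 = {"data", "learning", "mining", "text"}    # text mining

print(jaccard(d0, d1))  # 2/5 = 0.4
print(jaccard(d1, d2))  # 3/5 = 0.6
```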

Term frequency, inverse document frequency
Term frequency (TF) means how often a term occurs in a document. There is a problem with calculating term frequencies: the longer a document, the higher the probability that a term will occur and thus get more weight.

The inverse document frequency accounts for the fact that rarer terms should actually be more informative:

IDF(t) = log(N / df_t)

Where N is the total number of documents and df_t is the number of documents containing term t.

Term frequency – inverse document frequency, or tf*idf, is a statistic intended to reflect how important a term is to a document in a collection of documents. Its weighting helps to adjust for the fact that some words appear more frequently in general. However, we still don't account for the fact that longer documents will be weighted more.
For example:

Document 1            Document 2
Term     Count        Term      Count
this     1            this      1
is       1            is        1
a        2            another   2
sample   1            example   3

Term = "example"
tf(example, d1) = 0/5 = 0
tf(example, d2) = 3/7 ≈ 0.429
idf(example, D) = log10(2/1) ≈ 0.301
tf*idf(example, d1, D) = 0 × 0.301 = 0
tf*idf(example, d2, D) = 0.429 × 0.301 ≈ 0.129
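A short Python sketch that reproduces these numbers (log base 10, as in the example; the helper names are illustrative):

```python
import math

d1 = ["this", "is", "a", "a", "sample"]                                     # 5 tokens
d2 = ["this", "is", "another", "another", "example", "example", "example"]  # 7 tokens
docs = [d1, d2]

def tf(term, doc):
    # Relative term frequency within one document.
    return doc.count(term) / len(doc)

def idf(term, docs):
    # log10(N / df_t): N documents, df_t of them contain the term.
    df = sum(term in doc for doc in docs)
    return math.log10(len(docs) / df)

def tf_idf(term, doc, docs):
    return tf(term, doc) * idf(term, docs)

print(tf("example", d2))            # 3/7 ≈ 0.429
print(idf("example", docs))         # log10(2/1) ≈ 0.301
print(tf_idf("example", d2, docs))  # ≈ 0.129
print(tf_idf("example", d1, docs))  # 0.0
```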

Feature selection
Feature selection is the process of selecting a subset of relevant features. Ideally, this subset retains (nearly) the same predictive power as the original dataset.
Feature selection (a small filter-style example follows this list):
• Reduces the complexity of a model
• Reduces the demand on hardware resources
• Reduces the "curse of dimensionality"
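A small filter-style selection sketch (assuming scikit-learn; the dataset, scoring function and k are illustrative choices, not from the notes):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Toy data: 100 instances, 20 features, only a few of which are informative.
X, y = make_classification(n_samples=100, n_features=20, n_informative=4, random_state=0)

# Filter strategy: keep the 4 features with the highest ANOVA F-score.
selector = SelectKBest(score_func=f_classif, k=4)
X_reduced = selector.fit_transform(X, y)
print(X.shape, "->", X_reduced.shape)  # (100, 20) -> (100, 4)
```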