
Summary: processing (advanced data analysis)

Pages: 6
Uploaded on: 08-06-2024
Written in: 2023/2024

Summary of the PowerPoint slides on processing.



Lesson 2: processing principles
Unstructured data
-> data has no pre-defined structure
-> often text-heavy
-> many irregularities

Common data processing steps in data mining
1. Feature extraction: convert the heterogeneous data into
numerical features.
-> capture the features we are most interested in
-> feature = a question whose response is something that the
computer understands
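
As an illustration of step 1, here is a minimal Python sketch (the text record and feature names are made up, not from the slides) that turns unstructured text into numerical features, each answering a question the computer can understand:

```python
# Hypothetical example: turn a free-text record into numerical features.
def extract_features(text):
    words = text.lower().split()
    return {
        "n_words": len(words),                         # "how long is the message?"
        "has_error": int("error" in words),            # "does it mention an error?" (yes/no)
        "n_digits": sum(ch.isdigit() for ch in text),  # "how many digits does it contain?"
    }

features = extract_features("Error 404 while loading the page")
# features == {"n_words": 6, "has_error": 1, "n_digits": 3}
```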

2. Attribute transformation: alters the data by replacing a selected
attribute with one or more new attributes (functionally dependent on
the original one, to facilitate further analysis)
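
A one-line sketch of such a transformation (with invented values): replacing a skewed attribute by its logarithm, which is functionally dependent on the original:

```python
import math

# Replace a skewed income attribute by its base-10 logarithm; the new
# attribute is a function of the original one and easier to analyse.
incomes = [1000, 10000, 100000]  # hypothetical values
log_incomes = [math.log10(x) for x in incomes]
```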

3. Discretization: continuous variables -> discrete/nominal
attributes/features (BMI -> overweight, obese, not obese)
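
The BMI example sketched in Python (the 25/30 cut-offs are assumed here, following the common convention; the slides only name the categories):

```python
def bmi_category(bmi):
    # continuous BMI value -> nominal category
    if bmi >= 30:
        return "obese"
    if bmi >= 25:
        return "overweight"
    return "not obese"

categories = [bmi_category(b) for b in (22.0, 27.5, 31.2)]
# categories == ["not obese", "overweight", "obese"]
```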

4. Aggregation: combine two or more attributes into a single one
-> data reduction, change of scale, more stable data (aggregated
data have less variability)
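
A small sketch with invented daily measurements: aggregating them into three-day means both reduces the data and lowers its variability:

```python
from statistics import mean, pstdev

daily = [10, 12, 8, 11, 9, 30, 10, 12, 9, 11, 10, 28]  # hypothetical values
# change of scale: 12 daily values -> 4 three-day means (data reduction)
aggregated = [mean(daily[i:i + 3]) for i in range(0, len(daily), 3)]
# pstdev(aggregated) is smaller than pstdev(daily): aggregated data vary less
```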

5. Noise removal: remove random fluctuations in data that hinder the
perception of the true signal
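
One common way to do this (a sketch of one possible technique, not necessarily the course's method) is a moving average, which damps random fluctuations around the true signal:

```python
from statistics import mean

signal = [1, 1, 9, 1, 1, 1, 1]  # a flat signal with one noisy spike
# 3-point moving average: replace each interior point by its local mean
smoothed = [mean(signal[i - 1:i + 2]) for i in range(1, len(signal) - 1)]
# the spike of 9 is flattened; the smoothed maximum is well below 9
```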

6. Outlier removal: outliers are objects with characteristics that are
considerably different from most of the other objects in the set
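
A simple sketch of one possible removal rule (assumed here for illustration): drop objects more than two standard deviations from the mean:

```python
from statistics import mean, pstdev

data = [10, 11, 9, 10, 12, 10, 11, 95]  # 95 is clearly different from the rest
mu, sigma = mean(data), pstdev(data)
# keep only objects within two standard deviations of the mean
cleaned = [x for x in data if abs(x - mu) <= 2 * sigma]
# 95 is removed; the other values are kept
```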

7. Sampling: because obtaining/processing the entire set of data of
interest is often too expensive/time-consuming
-> the sample needs to be representative and have the same
properties as the population
-> simple random sampling: equal probability of selecting any
particular item
   - sampling with replacement (reuse of an item): objects are not
removed from the population when they are selected for the
sample
-> stratified sampling: split the data into several partitions & then
draw random samples from each partition
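
The sampling variants above, sketched with Python's random module on a toy population:

```python
import random

random.seed(0)  # for reproducibility
population = list(range(100))

# simple random sampling without replacement: each item selected at most once
without = random.sample(population, 10)

# sampling with replacement: an item may be selected (reused) more than once
with_repl = random.choices(population, k=10)

# stratified sampling: partition the data first, then sample from each partition
strata = {"low": [x for x in population if x < 50],
          "high": [x for x in population if x >= 50]}
stratified = [x for part in strata.values() for x in random.sample(part, 5)]
```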

8. Handling duplicate data
-> data cleaning
-> for example: same person with multiple email addresses
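
The e-mail example sketched in Python (the records and addresses are hypothetical), deduplicating on a chosen identity key:

```python
# The same person appears twice under different e-mail addresses.
records = [
    {"name": "Ann Smith", "email": "ann@uni.example"},
    {"name": "Ann Smith", "email": "a.smith@mail.example"},
    {"name": "Bob Jones", "email": "bob@uni.example"},
]

# Data cleaning: keep the first record per identity key (here: the name).
seen, deduped = set(), []
for rec in records:
    if rec["name"] not in seen:
        seen.add(rec["name"])
        deduped.append(rec)
# deduped now holds one record for Ann and one for Bob
```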

9. Handling missing values
-> NA
-> causes: info is not collected, errors are made during an
experiment, attributes may not be applicable to all cases
-> MCAR (missing completely at random): certain values are missing,
but the fact that they are missing is not related to the features of
the individual (e.g. missing a page while filling in a survey)
-> MAR (missing at random): a value might be missing, but the fact
that it is missing is not random; the missingness is related to the
observed data, not to the unobserved data (males are less likely to
fill in a depression survey: their answers are missing because they
are male, not because they are depressed; or, in a medical study,
suppose younger participants are less likely to report their weight:
the missingness of weight then depends on age, which is observed)
-> MNAR (missing not at random): the value of the variable that is
missing is related to the reason why it is missing, i.e. related to
the unobserved data (for example, income is missing exactly because
you have no income)

How to handle? Ignore the missing value, eliminate data objects, or
estimate the missing value
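
Two of those strategies sketched on a toy column (invented values): eliminating incomplete objects versus estimating the missing value with the mean:

```python
from statistics import mean

ages = [25, None, 31, None, 28]  # None marks a missing value (NA)

# strategy 1: eliminate data objects that have a missing value
complete = [a for a in ages if a is not None]

# strategy 2: estimate (impute) the missing value, e.g. with the observed mean
estimate = mean(complete)
imputed = [a if a is not None else estimate for a in ages]
# imputed == [25, 28, 31, 28, 28]
```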

10. Dimensionality reduction: curse of dimensionality =
when dimensionality increases, data becomes increasingly sparse in
the space that it occupies. The higher the dimensionality, the less
meaningful the concept of distance becomes, which makes it hard to
find patterns.
-> sparse matrices are matrices in which most of the elements are
zero, i.e. they have more zero elements than non-zero elements
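
The sparsity definition can be checked mechanically; a small sketch with a made-up matrix:

```python
# A matrix is sparse when it has more zero elements than non-zero ones.
matrix = [
    [0, 0, 3, 0],
    [0, 0, 0, 0],
    [1, 0, 0, 0],
]
entries = [v for row in matrix for v in row]
n_zero = entries.count(0)
is_sparse = n_zero > len(entries) - n_zero
# 10 zeros vs 2 non-zeros -> this matrix is sparse
```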




Purpose:
-> avoid curse of dimensionality
-> reduce the amount of time and memory needed by data mining
algorithms
-> allow data to be more easily visualized
-> help to eliminate irrelevant features or reduce noise

Techniques of dimensionality reduction
