QUESTIONS & SOLUTIONS (RATED A+)
Association Mining > Apriori Algorithm > Properties - ANSWERProperty 1:
Rules generated from the same itemset (e.g. AB->C, AC->B, B->AC, all from {A,B,C})
must have the same support.
Property 2 (anti-monotonicity):
Support(A) >= Support(AB)
i.e. if an item such as milk appears in only one transaction, ignore any itemset containing milk, because its support can be at most 1.
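Both properties can be checked on a tiny example. This is a minimal sketch with hypothetical toy transactions (not from the notes):

```python
# Hypothetical toy transactions (not from the notes).
transactions = [
    {"A", "B", "C"},
    {"A", "B"},
    {"A", "C"},
    {"B", "C"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Property 1: rules derived from the same itemset {A,B,C} all share
# one support value, namely support({A,B,C}).
s_abc = support({"A", "B", "C"})  # support of AB->C, AC->B, and B->AC

# Property 2 (anti-monotonicity): support(A) >= support(AB),
# so any superset of an infrequent itemset can be pruned.
assert support({"A"}) >= support({"A", "B"})
```

Anti-monotonicity is what makes Apriori efficient: once an itemset falls below the support threshold, none of its supersets need to be counted.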
Apriori - you don't want independence - ANSWERUnlike Naive Bayes (where you assume
independence), you don't want independence here: independent items co-occur only by chance, which would lead to bad rules.
Make the life of an algorithm extremely difficult so that it... - ANSWERDoesn't overfit to
your dataset
Point of PREDICTIVE DATA MINING - ANSWERBuild a model that is generalizable
Point of DESCRIPTIVE DATA MINING - ANSWERTo describe your data
K-Fold Cross Validation - ANSWERDivide the dataset into k subsets and repeat the test k
times.
Each time, one of the k subsets is used as the test set and the remaining k-1 subsets are used as the
training data. The average error across all k trials is then computed.
Formula in notes:
error = (1/k) * sum from i=1 to k of the error for iteration i
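The averaging formula above can be sketched as follows; `error_fn` is a hypothetical stand-in for whatever per-fold error the model produces (not from the notes):

```python
def k_fold_error(n_points, k, error_fn):
    """Average error over k folds: (1/k) * sum of error_i."""
    indices = list(range(n_points))
    fold_size = n_points // k
    errors = []
    for i in range(k):
        # Fold i is held out as the test set...
        test = indices[i * fold_size:(i + 1) * fold_size]
        # ...and the remaining k-1 folds form the training set.
        train = [j for j in indices if j not in test]
        errors.append(error_fn(train, test))
    return sum(errors) / k
```

Setting k = n_points gives exactly the Leave One Out special case described next: every fold holds out a single data point.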
Special Case of K-Fold --> Leave One Out Cross Validation - ANSWERThe extreme
end of k-fold: it's where K (the number of subsets) = N (the number of data points in the
dataset).
Curse of Dimensionality - ANSWERDimensionality increases, the volume of the space
increases so fast that the available data become sparse. This sparsity is problematic for
any method that requires statistical significance. In order to obtain a statistically sound
and reliable result, the amount of data needed to support the result often grows
exponentially with the dimensionality.
Overfitting - ANSWER1. If you overfit the training data, the model will represent noise instead of
the underlying relationship
2. Inability to generalize well on unseen data (new data!)
3. The possibility of overfitting exists because the criterion used to train the model is not the same as
the criterion used to judge the efficacy of the model.
...that's why we have cross validation!!
How to avoid overfitting with the LSLR (Least Squares Linear Regression) algorithm: -
ANSWER1. Add a penalty (regularization) term to the least-squares objective
2. Variable selection
3. Build the model and remove variables that don't have good predictive power
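A minimal sketch of ideas 1 and 3, assuming the extra term is a ridge-style penalty (my interpretation; the notes don't name it), shown for a single variable where the closed form is simple:

```python
# Ridge-style penalty for one variable (an assumption about what the
# notes' "extra term" means):
#   minimize sum((y - w*x)^2) + lam * w^2
#   ==>  w = sum(x*y) / (sum(x^2) + lam)
def ridge_1d(xs, ys, lam):
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.0]  # roughly y = 2x (made-up data)

w_ols = ridge_1d(xs, ys, 0.0)  # lam = 0 gives ordinary least squares
w_reg = ridge_1d(xs, ys, 5.0)  # the penalty shrinks the coefficient

# Variable selection (idea 3): keep a variable only if its coefficient
# carries enough predictive weight.
keep = abs(w_reg) > 0.1
```

The penalty trades a little training-set fit for a smaller, less noise-sensitive coefficient, which is exactly the overfitting trade-off the notes describe.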
Entropy Based Decision Tree - ANSWERUnless you have a stopping condition,
overfitting is extremely likely
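The entropy that such a tree greedily minimizes, and the weighted average a stopping condition would monitor, can be sketched as:

```python
from math import log2

def entropy(class_counts):
    """Shannon entropy of a class-count distribution, in bits."""
    total = sum(class_counts)
    return -sum(c / total * log2(c / total) for c in class_counts if c > 0)

def weighted_entropy(splits):
    """Weighted average entropy over the child nodes of a split."""
    total = sum(sum(s) for s in splits)
    return sum(sum(s) / total * entropy(s) for s in splits)
```

Without a stopping condition (e.g. a minimum drop in weighted entropy), the tree keeps splitting until every leaf is pure, memorizing noise in the training data.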
1-R - ANSWERAvoids overfitting too much, because it's based on only one variable
Decision Tree Pruning - ANSWERStop splitting when the weighted average entropy decreases only a little
Naive Bayes
[all naive Bayes classifiers assume that the value of a particular feature is independent
of the value of any other feature, given the class variable] - ANSWER-make sure you don't
have non-independent features (because of this assumption, it is not likely to overfit)
-the assumption avoids the exponential number of parameters that combinations of feature values would otherwise require
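The exponential-combinations point can be made concrete with a parameter count. A small sketch, assuming d binary features (my framing, not from the notes):

```python
def full_joint_params(d):
    """Parameters per class for the full joint over d binary features:
    one probability per feature-value combination, minus 1 (they sum to 1)."""
    return 2 ** d - 1

def naive_bayes_params(d):
    """Parameters per class under the independence assumption:
    one P(feature | class) per binary feature."""
    return d
```

With 10 binary features, the full joint needs 1023 parameters per class while Naive Bayes needs only 10, which is why the independence assumption makes the model tractable and hard to overfit.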
Prism - ANSWERLikely to overfit, since it keeps adding conditions as long as the true class's probability increases
Descriptive Data Mining --> Association - ANSWERData: a table of Transaction ID | List of Items
Purchased
An itemset is written in { }.
A-->B is an association rule stating that B tends to co-occur with A: {A}-->{B}
Support(A-->B) = |AB|, the number of records containing both items A and B
-the more data, the more support! Support should be relatively high.
-a lower support threshold gives more rules
Confidence(A-->B) = |AB|/|A|
-higher confidence means the association is stronger
-but a lower confidence threshold gives more rules
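The two measures above can be sketched on hypothetical market-basket transactions (made up for illustration, not from the notes):

```python
# Hypothetical market-basket transactions (not from the notes).
transactions = [
    {"milk", "bread"},
    {"milk", "bread", "eggs"},
    {"bread"},
    {"milk"},
]

def count(itemset):
    """Number of records containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions)

def support(A, B):
    """Support(A-->B) = |AB|, the count of records with both A and B."""
    return count(A | B)

def confidence(A, B):
    """Confidence(A-->B) = |AB| / |A|."""
    return count(A | B) / count(A)
```

Here {milk}-->{bread} has support 2 and confidence 2/3: bread appears in two of the three baskets that contain milk.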
Business/Insights/Potential - ANSWER1. If (A-->B) then put products A and B on the
same shelf