Assignment #1 Due January 16
2025-01-13
knitr::opts_chunk$set(echo = TRUE)
# Load required package
library(kernlab)
## Question 2.1
Describe a situation or problem from your job, everyday life, current events, etc., for which a classification model would be appropriate. List some (up to 5) predictors that you might use.
I am a Senior Financial Analyst in the University of Pennsylvania's Division of Finance. One example would be the school wanting to predict which students are at risk of defaulting on their tuition payment plans. A classification model could help the university proactively identify at-risk students and intervene with financial counseling or alternative payment options.
Predictors:

1. Family Income Level: the reported family income from the student's financial aid application.
2. Payment History: past behavior in making tuition payments on time (e.g., consistent, delayed, or missed payments).
3. Financial Aid Received: the percentage of tuition covered by scholarships, grants, and loans versus out-of-pocket payment.
4. Enrollment Status: whether the student is enrolled full-time, part-time, or has changed enrollment status mid-semester.
5. Extracurricular Commitments: the number of hours a student spends on work-study jobs or other paid activities, which might indicate financial strain.
This model could enhance financial aid services by allowing targeted support for students most likely to face
financial difficulties, improving retention rates and student satisfaction.
## Question 2.2
1. Using the support vector machine function ksvm contained in
the R package kernlab, find a good classifier for this data. Show
the equation of your classifier, and how well it classifies the data
points in the full data set. (Don’t worry about test/validation data
yet; we’ll cover that topic soon.)
The optimal C value found by the for-loop is C = 10. With C = 10, the model yields a reasonable proportion of "Yes" predictions at 53.7%, and its accuracy on the full data set is relatively good at 86.4%.
The equation of this model is as follows:

-0.0009033671*V1 - 0.0007891036*V2 - 0.0016972133*V3 + 0.0026113628*V4 + 1.0050221406*V5 - 0.0028363016*V6 - 0.0001569285*V7 - 0.0003925964*V8 - 0.0012784443*V9 + 0.1064387167*V10 + 0.08157559 = 0
V5 and V10 have the most significant contributions to the model.
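As a quick sanity check, the decision rule can be evaluated directly from these coefficients: a data point x (scaled the same way ksvm scaled the training data) is classified by the sign of sum(a * x) + a0. The sketch below uses the coefficients reported above together with a hypothetical scaled point, not an actual row of the credit card data:

```r
# Coefficients a1..a10 and intercept a0 taken from the fitted model above
a <- c(-0.0009033671, -0.0007891036, -0.0016972133, 0.0026113628,
       1.0050221406, -0.0028363016, -0.0001569285, -0.0003925964,
       -0.0012784443, 0.1064387167)
a0 <- 0.08157559

# Hypothetical scaled data point (illustration only, not from the data set)
x <- c(0.2, -0.1, 0.0, 0.3, 1.5, -0.2, 0.1, 0.0, -0.4, 0.8)

# Decision value and predicted class (1 if positive, 0 otherwise)
f <- sum(a * x) + a0
pred_class <- as.integer(f > 0)
```

Because V5 and V10 carry the largest weights, the decision value here is driven almost entirely by those two components of x.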
# Load the data
data <- read.table("credit_card_data.txt", header = FALSE)

# Candidate values of C and trackers for the best result
C_values <- c(10^-20, 10^-10, 10, 100, 1000, 10000, 100000, 1000000, 10000000)
best_accuracy <- 0.0
best_C <- NA

# Loop over different values of C
for (i in seq_along(C_values)) {
  C <- C_values[i]
  # Train model using ksvm with the current C value
  model <- ksvm(as.matrix(data[, 1:10]),
                as.factor(data[, 11]),
                type = "C-svc",
                kernel = "vanilladot", # vanilladot is a simple linear kernel
                C = C, scaled = TRUE)
  # See what the model predicts on the full data set
  pred <- predict(model, data[, 1:10])
  # Calculate accuracy
  accuracy <- sum(pred == data[, 11]) / nrow(data)
  if (accuracy > best_accuracy) {
    best_accuracy <- accuracy
    best_C <- C
    # Calculate coefficients a1..am for the best C so far
    a <- colSums(model@xmatrix[[1]] * model@coef[[1]])
    # Calculate the intercept a0
    a0 <- -model@b
    # Proportion of data predicted as "Yes" (53.67% at C = 10)
    prop <- sum(pred == 1) / nrow(data)
  }
}
## Setting default kernel parameters
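Overall accuracy can hide asymmetric errors between the two classes. A confusion matrix built with base R's table() shows how many points of each true class were misclassified; a minimal sketch using hypothetical prediction and label vectors (not the real credit card data):

```r
# Hypothetical predictions and true labels (illustration only)
pred   <- c(1, 1, 0, 0, 1, 0, 1, 0)
actual <- c(1, 0, 0, 0, 1, 1, 1, 0)

# Rows = predicted class, columns = true class
conf <- table(Predicted = pred, Actual = actual)
print(conf)

# Overall accuracy is the sum of the diagonal over the table total
acc <- sum(diag(conf)) / sum(conf)
```

In the real analysis, pred and data[, 11] would take the place of these vectors, and the off-diagonal cells would show whether the roughly 13.6% of errors fall mostly on "Yes" or "No" points.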