Decision Trees in R

Author: 灵妍 | Published 2018-03-19 21:49
    1. Feature scaling

    Feature scaling here is done only for convenience when visualising the results; decision trees do not use Euclidean distance, so they do not actually require feature scaling.

    2. Code
    # Decision Tree Classification
    
    # Importing the dataset
    dataset = read.csv('Social_Network_Ads.csv')
    dataset = dataset[3:5]
    
    # Encoding the target feature as factor
    dataset$Purchased = factor(dataset$Purchased, levels = c(0, 1))
    
    # Splitting the dataset into the Training set and Test set
    # install.packages('caTools')
    library(caTools)
    set.seed(123)
    split = sample.split(dataset$Purchased, SplitRatio = 0.75)
    training_set = subset(dataset, split == TRUE)
    test_set = subset(dataset, split == FALSE)
    
    # Feature Scaling
    training_set[-3] = scale(training_set[-3])
    test_set[-3] = scale(test_set[-3])
    
    # Fitting Decision Tree to the Training set
    # install.packages('rpart')
    library(rpart)
    classifier = rpart(formula = Purchased ~ .,
                       data = training_set)
    
    # Predicting the Test set results
    y_pred = predict(classifier, newdata = test_set[-3], type = 'class')
    
    # Making the Confusion Matrix
    cm = table(test_set[, 3], y_pred)
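    
    # (Added sketch, not in the original script: a quick accuracy figure
    # from the confusion matrix, correct predictions over all predictions.)
    accuracy = sum(diag(cm)) / sum(cm)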
    
    # Visualising the Training set results
    library(ElemStatLearn)
    set = training_set
    X1 = seq(min(set[, 1]) - 1, max(set[, 1]) + 1, by = 0.0075)
    X2 = seq(min(set[, 2]) - 1, max(set[, 2]) + 1, by = 0.0075)
    grid_set = expand.grid(X1, X2)
    colnames(grid_set) = c('Age', 'EstimatedSalary')
    y_grid = predict(classifier, newdata = grid_set, type = 'class')
    plot(set[, -3],
         main = 'Decision Tree (Training set)',
         xlab = 'Age', ylab = 'Estimated Salary',
         xlim = range(X1), ylim = range(X2))
    contour(X1, X2, matrix(as.numeric(y_grid), length(X1), length(X2)), add = TRUE)
    points(grid_set, pch = '.', col = ifelse(y_grid == 1, 'springgreen3', 'tomato'))
    points(set, pch = 21, bg = ifelse(set[, 3] == 1, 'green4', 'red3'))
    
    # Visualising the Test set results
    library(ElemStatLearn)
    set = test_set
    X1 = seq(min(set[, 1]) - 1, max(set[, 1]) + 1, by = 0.0075)
    X2 = seq(min(set[, 2]) - 1, max(set[, 2]) + 1, by = 0.0075)
    grid_set = expand.grid(X1, X2)
    colnames(grid_set) = c('Age', 'EstimatedSalary')
    y_grid = predict(classifier, newdata = grid_set, type = 'class')
    plot(set[, -3], main = 'Decision Tree (Test set)',
         xlab = 'Age', ylab = 'Estimated Salary',
         xlim = range(X1), ylim = range(X2))
    contour(X1, X2, matrix(as.numeric(y_grid), length(X1), length(X2)), add = TRUE)
    points(grid_set, pch = '.', col = ifelse(y_grid == 1, 'springgreen3', 'tomato'))
    points(set, pch = 21, bg = ifelse(set[, 3] == 1, 'green4', 'red3'))
    
    # Plotting the decision tree
    plot(classifier)
    text(classifier)
    

    Key code:
    library(rpart)
    classifier = rpart(formula = Purchased ~ .,
                       data = training_set)

    Predicting the test set results:

    y_pred = predict(classifier, newdata = test_set[-3], type = 'class')
    The type argument is what converts the predicted probabilities into class factors.
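
    A minimal illustration of what type = 'class' changes, using the classifier and test_set from the script above. For a classification tree, predict.rpart returns class probabilities by default:

    # Default for a classification rpart tree: a matrix of class probabilities
    prob_pred = predict(classifier, newdata = test_set[-3])
    head(prob_pred)    # columns "0" and "1", one probability per class
    
    # With type = 'class': the predicted factor levels themselves
    class_pred = predict(classifier, newdata = test_set[-3], type = 'class')
    head(class_pred)   # factor with levels 0 and 1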
    Results:


    [Figures: confusion matrix; training set; test set; with type = 'class'; without type = 'class'; decision tree]
    3. Differences from the decision tree in Python

    Unlike the Python version, the decision tree here is trimmed (pruned). The resulting plots show that noise points are screened out (outliers, i.e. observations far from the dense clusters that represent typical behaviour), and that the model correctly predicts the purchase behaviour of older, low-salary users and of younger, high-salary users, something a linear classifier cannot do.
    Moreover, its raw predictions are probabilities, that is, how likely each class is; we have to pass an extra argument to the predict function to display the result as class factors.
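
    The trimming comes from rpart's built-in complexity pruning. A hedged sketch (the cp values below are illustrative, not from the original post): the defaults in rpart.control already limit tree growth, and printcp/prune let you inspect and re-prune a fitted tree:

    # Inspect the complexity-parameter (cp) table of the fitted tree
    printcp(classifier)
    
    # rpart's defaults (cp = 0.01, minsplit = 20) already prune away weak
    # splits; growing a fuller tree and pruning it back looks like this:
    full_tree = rpart(Purchased ~ ., data = training_set,
                      control = rpart.control(cp = 0.001, minsplit = 2))
    pruned_tree = prune(full_tree, cp = 0.01)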

    4. Drawing the decision tree

    Plotting the tree itself does not need feature scaling, so we drop that step and rerun the script on the unscaled data.
    Remember to clear the variables and the plots first, and press Ctrl+L to clear the console window.
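
    A minimal sketch of that shortened run, assuming the same Social_Network_Ads.csv preprocessing as in section 2 but with the scale() step removed, so the split thresholds appear in the original units:

    # Clear the workspace and plots before rerunning
    rm(list = ls())
    
    # Same preprocessing as above, minus feature scaling
    dataset = read.csv('Social_Network_Ads.csv')
    dataset = dataset[3:5]
    dataset$Purchased = factor(dataset$Purchased, levels = c(0, 1))
    library(caTools)
    set.seed(123)
    split = sample.split(dataset$Purchased, SplitRatio = 0.75)
    training_set = subset(dataset, split == TRUE)
    
    # Refit and draw the tree; split values are now readable ages and salaries
    library(rpart)
    classifier = rpart(formula = Purchased ~ ., data = training_set)
    plot(classifier)
    text(classifier)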
