
1. Prepare the data
> install.packages("tree")
> library(tree)
> library(ISLR)
> attach(Carseats)
> High=ifelse(Sales<=8,"No","Yes")   # classify each store as High yes/no based on Sales
> Carseats=data.frame(Carseats,High) # append the High column to the data frame
> fix(Carseats)
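As a quick sanity check (a minimal sketch, not part of the original lab), the split induced by the new label can be inspected before modelling:
> table(High)             # counts of "No" vs "Yes" stores
> prop.table(table(High)) # the same counts as proportions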
2. Grow the decision tree
> tree.carseats=tree(High~.-Sales,Carseats)
> summary(tree.carseats)
# the output shows a training error rate of 9% (36/400)
Classification tree:
tree(formula = High ~ . - Sales, data = Carseats)
Variables actually used in tree construction:
[1] "ShelveLoc" "Price" "Income" "CompPrice" "Population"
[6] "Advertising" "Age" "US"
Number of terminal nodes: 27
Residual mean deviance: 0.4575 = 170.7 / 373
Misclassification error rate: 0.09 = 36 / 400
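To make the 0.09 figure concrete, the training error can be recomputed directly from in-sample predictions; a minimal sketch, assuming tree.carseats has been fit as above:
> train.pred=predict(tree.carseats,Carseats,type="class") # in-sample class predictions
> mean(train.pred!=High)                                  # fraction misclassified, 36/400 = 0.09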
3. Plot the decision tree
> plot(tree.carseats)
> text(tree.carseats,pretty=0)
4. Test error
# prepare the training and test data
# We begin by using the sample() function to split the set of observations into two halves,
# selecting a random subset of 200 observations out of the original 400.
> set.seed(1)
> train=sample(1:nrow(Carseats),200)
> Carseats.test=Carseats[-train,]
> High.test=High[-train]
# fit the tree model on the training data only
> tree.carseats=tree(High~.-Sales,Carseats,subset=train)
# estimate the test error by applying the fitted tree to the test data with predict();
# predict() is a generic function for predictions from the results of various model-fitting functions
> tree.pred=predict(tree.carseats,Carseats.test,type="class")
> table(tree.pred,High.test)
         High.test
tree.pred No Yes
      No  86  27
      Yes 30  57
> (86+57)/200
[1] 0.715
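The same 71.5% accuracy (equivalently a 28.5% test error rate) can also be computed without reading the counts off the table; a minimal sketch using the objects defined above:
> mean(tree.pred==High.test) # test-set accuracy, about 0.715
> mean(tree.pred!=High.test) # test-set error rate, about 0.285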
5. Prune the decision tree
# Next, we consider whether pruning the tree might lead to improved results. The function
# cv.tree() performs cross-validation in order to determine the optimal level of tree
# complexity; cost-complexity pruning is used to select a sequence of trees for consideration.
# For regression trees, only the default, deviance, is accepted. For classification trees,
# the default is deviance and the alternative is misclass (the number of misclassifications,
# or total loss).
# We use the argument FUN=prune.misclass to indicate that the classification error rate,
# rather than the default deviance, should guide the cross-validation and pruning process.
# For a regression tree, we would instead plot deviance against size, e.g.:
# plot(cv.boston$size,cv.boston$dev,type="b")
> set.seed(3)
> cv.carseats=cv.tree(tree.carseats,FUN=prune.misclass,K=10)
# The cv.tree() function reports the number of terminal nodes of each tree considered (size),
# the corresponding error rate (dev), and the value of the cost-complexity parameter used
# (k, which corresponds to alpha).
> names(cv.carseats)
[1] "size"   "dev"    "k"      "method"
> cv.carseats
$size   # the number of terminal nodes of each tree considered
[1] 19 17 14 13  9  7  3  2  1
$dev    # the corresponding cross-validation error rate
[1] 55 55 53 52 50 56 69 65 80
$k      # the value of the cost-complexity parameter used
[1]      -Inf  0.0000000  0.6666667  1.0000000  1.7500000  2.0000000  4.2500000
[8]  5.0000000 23.0000000
$method # misclass for a classification tree
[1] "misclass"
attr(,"class")
[1] "prune"         "tree.sequence"
# plot the CV error rate against tree size to see which size is best
> plot(cv.carseats$size,cv.carseats$dev,type="b")
# Note that, despite the name, dev corresponds to the cross-validation error rate in this
# instance. The tree with 9 terminal nodes results in the lowest cross-validation error
# rate, with 50 cross-validation errors. We plot the error rate as a function of both size
# and k, as in the sketch below.
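A minimal sketch of that two-panel plot (the par(mfrow=...) layout is an assumption added here, not code from the original post):
> par(mfrow=c(1,2))
> plot(cv.carseats$size,cv.carseats$dev,type="b") # CV error vs number of terminal nodes
> plot(cv.carseats$k,cv.carseats$dev,type="b")    # CV error vs cost-complexity parameter k
> par(mfrow=c(1,1))                               # reset the plotting layout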
> prune.carseats=prune.misclass(tree.carseats,best=9)
> plot(prune.carseats)
> text(prune.carseats,pretty=0)
# compute the test error again to see how the pruned tree performs on the test data
> tree.pred=predict(prune.carseats,Carseats.test,type="class")
> table(tree.pred,High.test)
         High.test
tree.pred No Yes
      No  94  24
      Yes 22  60
> (94+60)/200
[1] 0.77
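Pruning to 9 terminal nodes therefore improves test accuracy from 71.5% to 77% while producing a smaller, more interpretable tree. Other values of best can be tried in the same way; a minimal sketch (best=15 is only an illustrative choice, not taken from the original post):
> prune.carseats=prune.misclass(tree.carseats,best=15)         # keep a larger subtree
> tree.pred=predict(prune.carseats,Carseats.test,type="class")
> mean(tree.pred==High.test)                                   # accuracy for the larger subtree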