R provides its own functions for computing sample distances (dist) and for hierarchical clustering (hclust), each offering several algorithms to choose from depending on the characteristics of the data. In addition, the vegdist function in the ecology package vegan computes distances with many dissimilarity measures commonly used in ecology. The object returned by hclust can be plotted directly with plot. With the ape package, as.phylo combined with write.tree converts the clustering result into a Newick (nwk) file.
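A minimal sketch of that pipeline (assuming data is the sample-by-species matrix built in section 1 below):
library(vegan)   # vegdist
library(ape)     # as.phylo, write.tree
d  = vegdist(data, method = "bray")           # sample distances
hc = hclust(d, method = "complete")           # hierarchical clustering
plot(hc)                                      # dendrogram
write.tree(as.phylo(hc), file = "tree.nwk")   # export as a Newick file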
1 Input data
library(tidyverse) # data manipulation
library(cluster) # clustering algorithms
library(factoextra) # clustering visualization
library(dendextend) # for comparing two dendrograms
set.seed(1995)
# random seed for reproducibility
data=matrix(abs(round(rnorm(200, mean=0.5, sd=0.25), 2)), 20, 10)
# random non-negative values, 20 rows x 10 columns
colnames(data)=paste("Species", 1:10, sep=".")
# column names: species (taxa)
rownames(data)=paste("Sample", 1:20, sep=".")
Note: each row is one sample, each column one species.
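If the table is laid out the other way round (species in rows, samples in columns, as in many OTU tables), transpose it first. A quick check:
dim(data)             # should be samples x species, here 20 x 10
head(rownames(data))  # row names should be the sample IDs
# data = t(data)      # uncomment to transpose if species are in the rows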
2 dist hclust plot
2.1 dist: compute distances between samples
R dist documentation:
https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/dist
Available methods:
"euclidean", "maximum", "manhattan", "canberra", "binary" or "minkowski"
dist = dist(data, method = "euclidean")
2.2 hclust: cluster the samples
R hclust documentation:
https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/hclust
Available agglomeration methods:
"ward.D", "ward.D2", "single", "complete", "average" (= UPGMA), "mcquitty" (= WPGMA), "median" (= WPGMC) or "centroid" (= UPGMC).
hclust_dist = hclust(dist, method = "complete")
2.3 plot: the clustering result
plot(hclust_dist)
plot(hclust_dist, hang = -1)
rect.hclust(hclust_dist, k = 4, border = 1:4)
hang = -1: align all leaf labels at the same level
k: number of clusters to cut the tree into
border: border colors of the rectangles drawn by rect.hclust
2.4 Cluster scatter plot: fviz_cluster
library("factoextra")
sub = cutree(hclust_dist, k=4)
fviz_cluster(list(data = data, cluster = sub))
3 vegdist hclust plot
library("vegan")
vegdist = vegdist(data, method = "bray")
hclust_vegdist = hclust(vegdist, method = "complete")
plot(hclust_vegdist)
4 hcut - tree cutting, method 1
https://rpkgs.datanovia.com/factoextra/reference/hcut.html
hcut computes the distance, clusters, and cuts the tree in a single step.
res <- hcut(data, k = 4, stand = TRUE,
            hc_metric = "euclidean",
            hc_func = "hclust",
            hc_method = "complete")
stand: whether to standardize the data first with scale()
k: number of clusters to generate
hc_metric: distance measure (e.g. "euclidean")
hc_func: clustering function ("hclust", "agnes" or "diana")
hc_method: agglomeration (linkage) method (e.g. "complete")
Result:
fviz_dend(res, rect = TRUE)
Cluster evaluation with the silhouette coefficient (Silhouette Coefficient):
si close to 1: sample i is assigned to a suitable cluster;
si close to -1: sample i would fit better in another cluster;
si close to 0: sample i lies on the boundary between two clusters.
fviz_silhouette(res)
fviz_cluster(res)
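The silhouette values plotted above can also be computed directly with cluster::silhouette() as a numerical cross-check; a sketch using the res object from hcut, which stores the cluster assignments in res$cluster:
library(cluster)
sil = silhouette(res$cluster, dist(scale(data)))  # stand = TRUE above, so scale() first
summary(sil)                                      # mean silhouette width per cluster
mean(sil[, "sil_width"])                          # overall average silhouette width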
5 cut - tree cutting, method 2
https://stackoverflow.com/questions/41992119/cut-a-dendrogram
dend <- as.dendrogram(hclust(dist(data)))
depth.cutoff <- 1
plot(dend, horiz = F)
abline(h=depth.cutoff, col="red", lty=2)
horiz: draw the tree horizontally (TRUE) or vertically (FALSE)
h: height of the horizontal line drawn by abline
v: position of a vertical line drawn by abline
cut(dend, h = depth.cutoff)
summary(cut(dend, h = depth.cutoff))
attr(cut(dend, h = depth.cutoff)$lower[[1]], "members")
attr(cut(dend, h = depth.cutoff)$lower[[2]], "members")
$upper is the part of the dendrogram above the cutoff; $lower is a list of the sub-trees (clusters) below the cutoff.
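To list the samples that end up in each lower sub-tree, labels() can be applied to the elements of $lower; a short sketch:
cut_res = cut(dend, h = depth.cutoff)
lapply(cut_res$lower, labels)                          # sample names in each lower sub-tree
sapply(cut_res$lower, function(x) attr(x, "members"))  # size of every lower sub-tree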
Plot the part of the tree above the cutoff (the remaining branches):
plot(cut(dend, h = depth.cutoff)$upper, horiz = F)
6 cutree - tree cutting, method 3
6.1 Build the tree
vegdist = vegdist(data, method = "euclidean") # distance
hclust_vegdist = hclust(vegdist, method = "complete") # clust
plot(hclust_vegdist) # inspect the tree to choose a cutoff height (or k)
depth.cutoff = 1.2
plot(hclust_vegdist)
abline(h=depth.cutoff, col="red", lty=2)
6.2 Cut the tree
cutree(hclust_vegdist, h = depth.cutoff)
6.3 Tidy the results
clusters = data.frame(cutree(hclust_vegdist, h = depth.cutoff))
colnames(clusters) = c("clusters")
clusters$samples = rownames(clusters)
clusters = data.frame(clusters[order(clusters$clusters, decreasing=F),])
6.4 Verify the sample-to-cluster mapping
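One simple check (a sketch) is to draw the cut on the dendrogram and compare the resulting boxes with the clusters table:
plot(hclust_vegdist)
rect.hclust(hclust_vegdist, h = depth.cutoff, border = "red")  # boxes at the chosen cutoff
table(clusters$clusters)  # number of samples per cluster, should match the boxes
head(clusters)            # sample-to-cluster mapping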
6.5 Hub samples
Show the distribution of the clustered samples with fviz_cluster:
sub = cutree(hclust_vegdist, h = depth.cutoff) # same cut as in 6.2, so the plot matches the clusters table
fviz_cluster(list(data = data, cluster = sub))
Get the hub sample of one cluster (the sample with the smallest total distance to the other members):
# 1. convert the distance matrix into a data frame
df_dist = data.frame(as.matrix(vegdist))
# 2. extract and tidy the data for cluster 1
target = clusters[clusters$clusters=="1",]$samples
target_dist = df_dist[c(target), c(target)]
dist_sum = apply(target_dist, 1, FUN=sum)
target_dist_sum = data.frame(samples = rownames(target_dist),
                             dist_sum = dist_sum)
target_dist_sum_sort = target_dist_sum[order(target_dist_sum$dist_sum, decreasing=F),]
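The hub sample of cluster 1 is then the first row after sorting, i.e. the sample with the smallest summed distance to all other members:
hub_sample = target_dist_sum_sort$samples[1]  # most central sample of cluster 1
hub_sample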
7 Compare trees: comparing distance measures or clustering algorithms
library(dendextend) # for comparing two dendrograms
hc1 <- hclust(vegdist(data, method = "bray"), method = "complete")
hc2 <- hclust(vegdist(data, method = "bray"), method = "average")
tanglegram(as.dendrogram(hc1), as.dendrogram(hc2))
dend1 = as.dendrogram(hc1)
dend2 = as.dendrogram(hc2)
dend_list <- dendlist(dend1, dend2)
tanglegram(dend1, dend2,
           highlight_distinct_edges = FALSE,       # turn off dashed lines
           common_subtrees_color_lines = FALSE,    # turn off line colors
           common_subtrees_color_branches = TRUE,  # color common branches
           main = paste("entanglement =", round(entanglement(dend_list), 2))
)
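Besides the visual comparison, dendextend can also quantify how similar the two trees are through the cophenetic correlation (1 means identical structure); a short sketch:
cor_cophenetic(dend1, dend2)  # cophenetic correlation between the two trees
cor.dendlist(dend_list)       # correlation matrix for all trees in the dendlist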
8 Choosing the appropriate number of clusters
Elbow Method
fviz_nbclust(data, FUN = hcut, method = "wss")
Average Silhouette Method
fviz_nbclust(data, FUN = hcut, method = "silhouette")
Gap Statistic Method
library(cluster)
gap_stat <- clusGap(data, FUN = hcut, nstart = 25, K.max = 10, B = 50)
fviz_gap_stat(gap_stat)
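The suggested number of clusters can also be read off the clusGap object directly; a sketch (the method argument selects the decision rule):
print(gap_stat, method = "firstmax")  # gap statistic per k and the chosen k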
9 k-means clustering
set.seed(123)
# Data preparation
data("iris")
head(iris)
# Remove species column (5) and scale the data
iris.scaled <- scale(iris[, -5])
# K-means clustering
km.res <- kmeans(iris.scaled, 3, nstart = 10)
# Visualize kmeans clustering
# use repel = TRUE to avoid overplotting
fviz_cluster(km.res, iris[, -5], ellipse.type = "norm")
# Change the color palette and theme
fviz_cluster(km.res, iris[, -5],
             palette = "Set2", ggtheme = theme_minimal())
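Since the true species labels of iris are known, the clustering can be checked against them with a simple cross-tabulation:
km.res$size                          # number of flowers in each cluster
table(iris$Species, km.res$cluster)  # clusters vs. true species labels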
10 Convert the tree to nwk and plot different tree layouts
library(ape)
tree = as.phylo(hclust_vegdist)
plot(tree, type="fan")
Available layouts (type):
"phylogram", "cladogram", "fan", "unrooted", "radial"
write.tree saves the tree in Newick (nwk) format:
write.tree(phy=tree, file="tree.nwk")
# save as a Newick (nwk) file
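To check the export, the file can be read back with ape::read.tree (a quick round-trip check):
tree_check = read.tree("tree.nwk")
tree_check        # prints the number of tips and whether the tree is rooted
plot(tree_check)  # should show the same topology as the dendrogram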
Further reading:
R: UPGMA cluster analysis and dendrograms
R: hierarchical cluster analysis with dist, hclust, heatmap, etc.
R: hierarchical clustering of the iris data set
cmdscale: Classical (Metric) Multidimensional Scaling
cmdscale: classical metric multidimensional scaling, also known as PCoA (principal coordinates analysis)
Cluster with distance threshold in R
Hierarchical Clustering in R: Step-by-Step Example
Hierarchical Clustering in R
Hierarchical Cluster Analysis
Recommended: tree comparison and calculating the optimal number of clusters