from sklearn.metrics import confusion_matrix
y_true = ["dog", "dog", "dog", "cat", "cat", "cat", "cat"]
y_pred = ["cat", "cat", "dog", "cat", "cat", "cat", "cat"]
C2 = confusion_matrix(y_true, y_pred, labels=["dog", "cat"])
tn, fp, fn, tp = C2.ravel()
print(C2)          # [[1 2]
                   #  [0 4]]
print(C2.ravel())  # [1 2 0 4]
# For the binary (case/control) situation, the order in labels is [negative, positive] (i.e. [FALSE, TRUE])
# labels may be strings or numbers; they must match the values appearing in y_true
# Because the two classes usually mean something like diseased vs. control samples, reversing the order gives wrong results
Note that tn, fp, fn, tp are counts, not ratios.
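A minimal sketch (reusing the toy data above) showing why the `labels` order matters and how the raw counts convert to the usual rates; the variable names here are just illustrative:

```python
from sklearn.metrics import confusion_matrix

y_true = ["dog", "dog", "dog", "cat", "cat", "cat", "cat"]
y_pred = ["cat", "cat", "dog", "cat", "cat", "cat", "cat"]

# labels=["dog", "cat"]: dog is treated as negative, cat as positive
tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=["dog", "cat"]).ravel()
print(tn, fp, fn, tp)      # 1 2 0 4

# Reversing labels swaps the roles (cat negative, dog positive),
# so the same four counts land in different slots
tn2, fp2, fn2, tp2 = confusion_matrix(y_true, y_pred, labels=["cat", "dog"]).ravel()
print(tn2, fp2, fn2, tp2)  # 4 0 2 1

# The counts can then be turned into rates
sensitivity = tp / (tp + fn)  # true positive rate
specificity = tn / (tn + fp)  # true negative rate
print(sensitivity, specificity)
```

If the positive class is written second in `labels` by mistake, sensitivity and specificity are silently swapped, which is exactly the pitfall described above.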
[Figure: diagram of the TN/FP/FN/TP layout, included to aid understanding; note its arrangement differs from the position ordering inside the confusion matrix output.]
References:
https://blog.csdn.net/wsljqian/article/details/99435808
sklearn.metrics.confusion_matrix documentation