Not every outcome (dependent) variable can be sensibly modelled with linear regression. Probably the second most common type of regression model is logistic regression, which is appropriate for binary outcome data. How do we calculate an R squared for a logistic regression model?
McFadden's R squared
In R, glm (generalized linear models) is the standard command for fitting logistic regressions. As far as I am aware, the fitted glm object does not directly give you any pseudo R squared values, but McFadden's measure is easy to calculate. We first fit the model of interest, then the null model which contains only an intercept; McFadden's R squared is one minus the ratio of the two fitted models' log-likelihoods:
mod <- glm(y~x, family="binomial")
nullmod <- glm(y~1, family="binomial")
1-logLik(mod)/logLik(nullmod)
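If you find yourself computing this regularly, it may be convenient to wrap the calculation in a small helper. The sketch below is my own convenience wrapper (the name mcfaddenR2 is hypothetical, not from any package); it refits the intercept-only null model with update and returns the pseudo R squared as a plain number:

# Hypothetical convenience wrapper for McFadden's pseudo R squared.
# Assumes 'mod' is a fitted glm whose data are still accessible to update().
mcfaddenR2 <- function(mod) {
  nullmod <- update(mod, . ~ 1)          # refit with intercept only
  as.numeric(1 - logLik(mod) / logLik(nullmod))
}

# Usage, assuming 'mod' was fitted as above:
# mcfaddenR2(mod)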
To get a feel for how strong a predictor needs to be in order to reach a given value of McFadden's R squared, we will simulate data with a single binary predictor X. We first try P(Y=1|X=0)=0.3 and P(Y=1|X=1)=0.7:
set.seed(63126)
n <- 10000
x <- 1*(runif(n) < 0.5)
pr <- (x==1)*0.7 + (x==0)*0.3
y <- 1*(runif(n) < pr)
mod <- glm(y~x, family="binomial")
nullmod <- glm(y~1, family="binomial")
1-logLik(mod)/logLik(nullmod)
'log Lik.' 0.1320256 (df=2)
So even though X has quite a strong effect on the probability that Y=1, McFadden's R squared is only 0.13. To increase it, we have to make P(Y=1|X=0) and P(Y=1|X=1) more different:
set.seed(63126)
n <- 10000
x <- 1*(runif(n) < 0.5)
pr <- (x==1)*0.9 + (x==0)*0.1
y <- 1*(runif(n) < pr)
mod <- glm(y~x, family="binomial")
nullmod <- glm(y~1, family="binomial")
as.numeric(1-logLik(mod)/logLik(nullmod))
[1] 0.5539419
Even with X changing P(Y=1) from 0.1 to 0.9, McFadden's R squared is only 0.55. Finally, we try values of 0.01 and 0.99, which I would call a very strong effect!
set.seed(63126)
n <- 10000
x <- 1*(runif(n) < 0.5)
pr <- (x==1)*0.99 + (x==0)*0.01
y <- 1*(runif(n) < pr)
mod <- glm(y~x, family="binomial")
nullmod <- glm(y~1, family="binomial")
as.numeric(1-logLik(mod)/logLik(nullmod))
[1] 0.9293177
Now we obtain a value much closer to 1.
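To see how McFadden's R squared grows as the effect of X strengthens, we can run the same simulation over a grid of probability pairs. The loop below is my own illustrative extension of the simulations above; the particular probability pairs are arbitrary choices:

# Illustrative sketch: McFadden's R squared for increasingly strong effects of a binary X
set.seed(63126)
n <- 10000
p0 <- c(0.4, 0.3, 0.2, 0.1, 0.01)   # P(Y=1|X=0)
p1 <- c(0.6, 0.7, 0.8, 0.9, 0.99)   # P(Y=1|X=1)
r2 <- numeric(length(p0))
for (i in seq_along(p0)) {
  x <- 1*(runif(n) < 0.5)
  pr <- (x==1)*p1[i] + (x==0)*p0[i]
  y <- 1*(runif(n) < pr)
  mod <- glm(y~x, family="binomial")
  nullmod <- glm(y~1, family="binomial")
  r2[i] <- as.numeric(1 - logLik(mod)/logLik(nullmod))
}
data.frame(p0, p1, McFaddenR2 = round(r2, 3))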
Grouped binomial data versus individual data
Binary outcome data can be recorded either at the individual level (one 0/1 response per subject) or in grouped form, with one row per covariate pattern giving the number of successes s and failures f. Consider the following grouped data frame:

data <- data.frame(s=c(700,300), f=c(300,700), x=c(0,1))
data
    s   f x
1 700 300 0
2 300 700 1
To fit a logistic regression model to these grouped data in R, we pass a two-column matrix of successes and failures, built with cbind, as the response to the glm function:
mod1 <- glm(cbind(s,f)~x, family="binomial", data=data)
summary(mod1)

Call:
glm(formula = cbind(s, f) ~ x, family = "binomial", data = data)

Deviance Residuals: 
[1]  0  0

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  0.84730    0.06901   12.28   <2e-16 ***
x           -1.69460    0.09759  -17.36   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 3.2913e+02  on 1  degrees of freedom
Residual deviance: 1.3323e-13  on 0  degrees of freedom
AIC: 18.371

Number of Fisher Scoring iterations: 2
We now convert the grouped binomial data into individual (Bernoulli) data and fit the same logistic regression model.
individualData <- rbind(cbind(data,y=0), cbind(data,y=1))
individualData$freq <- individualData$s
individualData$freq[individualData$y==0] <- individualData$f[individualData$y==0]
mod2 <- glm(y~x, family="binomial", data=individualData, weights=freq)
summary(mod2)

Call:
glm(formula = y ~ x, family = "binomial", data = individualData, 
    weights = freq)

Deviance Residuals: 
     1       2       3       4  
-26.88  -22.35   22.35   26.88  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  0.84730    0.06901   12.28   <2e-16 ***
x           -1.69460    0.09759  -17.36   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 2772.6  on 3  degrees of freedom
Residual deviance: 2443.5  on 2  degrees of freedom
AIC: 2447.5

Number of Fisher Scoring iterations: 4
As expected, we obtain the same parameter estimates and inference as from the grouped data frame.
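As a further check, instead of using frequency weights we could expand the data to one row per observation and refit. This is my own verification sketch, not part of the original analysis; the coefficient estimates should again be identical:

# Expand the weighted Bernoulli data to one row per observation and refit
expandedData <- individualData[rep(1:nrow(individualData), individualData$freq), ]
mod3 <- glm(y~x, family="binomial", data=expandedData)
coef(mod3)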
Next we fit the corresponding null (intercept-only) models and calculate McFadden's R squared for each version of the data:

nullmod1 <- glm(cbind(s,f)~1, family="binomial", data=data)
nullmod2 <- glm(y~1, family="binomial", data=individualData, weights=freq)
1-logLik(mod1)/logLik(nullmod1)
'log Lik.' 0.9581627 (df=2)
1-logLik(mod2)/logLik(nullmod2)
'log Lik.' 0.1187091 (df=2)
We see that the R squared from the grouped data model is 0.96, whereas the R squared from the individual data model is only 0.12. The discrepancy arises because the two data representations define different likelihoods: with grouped data the fitted model reproduces the two group proportions essentially exactly (its residual deviance is nearly zero), so almost all of the null deviance is 'explained', whereas with individual 0/1 outcomes even a model with the correct probabilities cannot predict each subject's outcome, so much of the Bernoulli log-likelihood remains unexplained.
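A quick way to see this numerically is to compare the fitted objects directly: the coefficients agree, but the log-likelihoods, and hence the likelihood ratios entering McFadden's formula, do not. This is just an illustrative check using the objects created above:

# Identical coefficient estimates from the two representations
coef(mod1)
coef(mod2)
# ...but very different log-likelihood values, hence different McFadden R squared
logLik(mod1); logLik(nullmod1)
logLik(mod2); logLik(nullmod2)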