A First Look at Kaggle: Data Analysis for the House Prices Case

Author: 小聪明李良才 | Published 2017-06-16 12:46, 7184 reads

    Overview

    The data comes from the Kaggle competition House Prices: Advanced Regression Techniques.

    While working through it I browsed many excellent reports and learned a great deal from them.

    import pandas as pd
    import numpy as np
    import seaborn as sns
    from scipy import stats
    from scipy.stats import skew
    from scipy.stats import norm
    import matplotlib.pyplot as plt
    from sklearn.preprocessing import StandardScaler
    from sklearn.manifold import TSNE
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA
    
    # import warnings
    # warnings.filterwarnings('ignore')
    
    %config InlineBackend.figure_format = 'retina' #set 'png' here when working on notebook
    %matplotlib inline
    
    train_df = pd.read_csv("../input/train.csv")
    test_df = pd.read_csv("../input/test.csv")
    

    Inspecting the Data

    After loading the data, we first get a rough feel for it: there are 1460 training rows and 1459 test rows, and 79 feature columns (train has 81 columns including Id and SalePrice), of which 35 are numeric and 44 categorical once MSSubClass is treated as a category as below.
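    A quick way to verify those counts (a sketch added here, not a cell from the original post):

    print(train_df.shape)                  # (1460, 81): 79 features plus Id and SalePrice
    print(test_df.shape)                   # (1459, 80): same features, no SalePrice
    print(train_df.dtypes.value_counts())  # rough split of numeric vs. object columns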

    Reading the data description, we notice that columns such as MSSubClass, OverallQual, and OverallCond are really categorical and could be converted to category types.

    Looking closer at OverallQual and OverallCond, though, they have no missing values and are ordinal ratings, so we can keep treating them as ints.

    all_df = pd.concat((train_df.loc[:, 'MSSubClass':'SaleCondition'],
                        test_df.loc[:, 'MSSubClass':'SaleCondition']),
                       axis=0, ignore_index=True)
    
    all_df['MSSubClass'] = all_df['MSSubClass'].astype(str)
    
    quantitative = [f for f in all_df.columns if all_df.dtypes[f] != 'object']
    qualitative = [f for f in all_df.columns if all_df.dtypes[f] == 'object']
    
    print("quantitative: {}, qualitative: {}" .format (len(quantitative),len(qualitative)))
    
    quantitative: 35, qualitative: 44
    

    Handling Missing Data

    Strategies for handling missing values (a sketch of each follows the list):

    1. If a column is missing in a very large share of rows, drop the column.
    2. If only a few values are missing, fill them with the mean.
    3. In between, for categorical columns we can treat "missing" as a new category and one-hot encode it.
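    A minimal sketch of these three strategies (my illustration: the 15% cutoff is arbitrary, and MasVnrArea / GarageType are just example columns):

    sketch = all_df.copy()

    # 1. Very high missing rate: drop the column entirely
    ratio = sketch.isnull().mean()
    sketch = sketch.drop(ratio[ratio > 0.15].index, axis=1)

    # 2. A few missing values in a numeric column: fill with the mean
    sketch['MasVnrArea'] = sketch['MasVnrArea'].fillna(sketch['MasVnrArea'].mean())

    # 3. Moderate missingness in a categorical column: make "missing" its own
    #    category, then one-hot encode
    sketch['GarageType'] = sketch['GarageType'].fillna('Missing')
    sketch = pd.get_dummies(sketch, columns=['GarageType'])

    First, let's quantify the missing values per column: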
    missing = all_df.isnull().sum()
    
    missing.sort_values(inplace=True,ascending=False)
    missing = missing[missing > 0]
    
    types = all_df[missing.index].dtypes
    
    percent = (all_df[missing.index].isnull().sum()/all_df[missing.index].isnull().count()).sort_values(ascending=False)
    
    missing_data = pd.concat([missing, percent,types], axis=1, keys=['Total', 'Percent','Types'])
    missing_data.sort_values('Total',ascending=False,inplace=True)
    missing_data
    
    [Table: missing count, percentage, and dtype for each column with missing values]

    missing.plot.bar()
    
    <matplotlib.axes._subplots.AxesSubplot at 0x112096c88>
    
    [Figure: bar chart of missing-value counts per column]

    Six of the columns above have missing rates over 15%; the rest are mostly the BsmtX and GarageX families. Before deciding how to handle these columns, let's look at some properties of the price we want to predict.

    Statistical Analysis

    Univariate Analysis

    First, some summary statistics for the target, SalePrice:

    train_df.describe()['SalePrice']
    
    count      1460.000000
    mean     180921.195890
    std       79442.502883
    min       34900.000000
    25%      129975.000000
    50%      163000.000000
    75%      214000.000000
    max      755000.000000
    Name: SalePrice, dtype: float64
    
    #skewness and kurtosis
    print("Skewness: %f" % train_df['SalePrice'].skew())
    print("Kurtosis: %f" % train_df['SalePrice'].kurt())
    # In statistics, kurtosis measures the peakedness/tailedness of a real random
    # variable's distribution. High kurtosis means the variance is driven by
    # infrequent extreme deviations above or below the mean.
    
    Skewness: 1.882876
    Kurtosis: 6.536282
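    For reference (a quick sketch, not from the original post): a normal sample has skewness near 0 and excess kurtosis near 0 under the Fisher definition pandas uses, so SalePrice is clearly right-skewed and heavy-tailed.

    s = pd.Series(np.random.RandomState(0).normal(size=100000))
    print("Skewness: %f" % s.skew())   # close to 0 for a normal sample
    print("Kurtosis: %f" % s.kurt())   # close to 0 (excess kurtosis)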
    

    Correlations

    We start by computing correlations to get a rough idea of which columns are most related to SalePrice.

    corrmat = train_df.corr()
    
    #saleprice correlation matrix
    k = 10 #number of variables for heatmap
    cols = corrmat.nlargest(k, 'SalePrice')['SalePrice'].index
    cm = np.corrcoef(train_df[cols].values.T)
    sns.set(font_scale=1.25)
    hm = sns.heatmap(cm, cbar=True, annot=True, square=True, fmt='.2f', annot_kws={'size': 10}, yticklabels=cols.values, xticklabels=cols.values)
    plt.show()
    
    [Figure: correlation heatmap of the 10 variables most correlated with SalePrice]

    # Columns that are both among the most correlated and have missing data
    missing_data.index.intersection(cols)
    
    Index(['GarageCars', 'GarageArea', 'TotalBsmtSF'], dtype='object')
    
    missing_data.loc[missing_data.index.intersection(cols)]
    
    [Table: missing_data rows for GarageCars, GarageArea, TotalBsmtSF]

    Given the correlation picture above, we can simply drop the columns with missing data (those with more than one missing value; the highly correlated columns GarageCars, GarageArea, and TotalBsmtSF each have only one missing value in the combined data and are kept).

    #dealing with missing data
    all_df = all_df.drop((missing_data[missing_data['Total'] > 1]).index, axis=1)
    # df_train = df_train.drop(df_train.loc[df_train['Electrical'].isnull()].index)
    all_df.isnull().sum().max() #just checking that there's no missing data missing...
    # Columns with a single missing value will be filled with the mean later
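    The remaining columns each have exactly one missing value. A minimal sketch of the fill promised above (not applied here, so later cells still see the NaNs; using the mode for categorical columns is my choice, the post only mentions the mean):

    filled = all_df.copy()
    for col in filled.columns[filled.isnull().any()]:
        if filled[col].dtype == object:
            filled[col] = filled[col].fillna(filled[col].mode()[0])  # most frequent value
        else:
            filled[col] = filled[col].fillna(filled[col].mean())
    assert not filled.isnull().any().any()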
    

    Normal Probability Plot

    #histogram and normal probability plot
    sns.distplot(train_df['SalePrice'], fit=norm);
    fig = plt.figure()
    res = stats.probplot(train_df['SalePrice'], plot=plt)
    
    [Figures: SalePrice histogram with normal fit; normal probability plot]

    The distribution is right-skewed and deviates from normal; a good remedy is a log transform:

    train_df['SalePrice'] = np.log(train_df['SalePrice'])
    
    #histogram and normal probability plot
    sns.distplot(train_df['SalePrice'], fit=norm);
    fig = plt.figure()
    res = stats.probplot(train_df['SalePrice'], plot=plt)
    
    [Figures: log(SalePrice) histogram with normal fit; normal probability plot]
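    One caveat worth noting (my addition, not in the original): with the target on the log scale, model predictions must be mapped back with np.exp before reporting prices; np.log1p/np.expm1 are the safer pair when zeros can occur.

    price = 180921.0                             # an example price in dollars
    log_price = np.log(price)                    # the transform applied above
    assert np.isclose(np.exp(log_price), price)  # invert with exp when predicting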

    Let's look at the distribution of each quantitative variable.

    quantitative = [f for f in all_df.columns if all_df.dtypes[f] != 'object']
    qualitative = [f for f in all_df.columns if all_df.dtypes[f] == 'object']
    print("quantitative: {}, qualitative: {}" .format (len(quantitative),len(qualitative)))
    
    quantitative: 30, qualitative: 26
    
    f = pd.melt(all_df, value_vars=quantitative)
    g = sns.FacetGrid(f, col="variable",  col_wrap=2, sharex=False, sharey=False)
    g = g.map(sns.distplot, "value")
    
    [Figure: distribution plots for each quantitative variable]

    Some of the distributions above look roughly log-normal, so a log transform could improve them; others are not suitable. Reasonable candidates include LotArea, BsmtUnfSF, 1stFlrSF, TotalBsmtSF, and KitchenAbvGr.

    Next, the skewness of each quantitative feature:

    all_df[quantitative].apply(lambda x: skew(x.dropna())).sort_values(ascending=False)
    
    MiscVal          21.947195
    PoolArea         16.898328
    LotArea          12.822431
    LowQualFinSF     12.088761
    3SsnPorch        11.376065
    KitchenAbvGr      4.302254
    BsmtFinSF2        4.145323
    EnclosedPorch     4.003891
    ScreenPorch       3.946694
    OpenPorchSF       2.535114
    WoodDeckSF        1.842433
    1stFlrSF          1.469604
    BsmtFinSF1        1.424989
    GrLivArea         1.269358
    TotalBsmtSF       1.162285
    BsmtUnfSF         0.919351
    2ndFlrSF          0.861675
    TotRmsAbvGrd      0.758367
    Fireplaces        0.733495
    HalfBath          0.694566
    OverallCond       0.570312
    BedroomAbvGr      0.326324
    GarageArea        0.241176
    OverallQual       0.197110
    MoSold            0.195884
    FullBath          0.167606
    YrSold            0.132399
    GarageCars       -0.218260
    YearRemodAdd     -0.451020
    YearBuilt        -0.599806
    dtype: float64
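    A common follow-up (a sketch of a possible step, not something the post does here) is to log1p-transform the features whose skew exceeds some threshold, say 0.75:

    skewness = all_df[quantitative].apply(lambda x: skew(x.dropna()))
    skewed = skewness[skewness > 0.75].index       # 0.75 is an arbitrary cutoff
    transformed = all_df.copy()
    transformed[skewed] = np.log1p(transformed[skewed])  # log1p handles zero values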
    

    Qualitative Feature Analysis

    Analysis of variance (ANOVA) is a common statistical model in data analysis; here we use it to check whether SalePrice differs across the categories of each qualitative feature.

    train = all_df.loc[train_df.index].copy()
    train['SalePrice'] = train_df.SalePrice
    
    def anova(frame):
        anv = pd.DataFrame()
        anv['feature'] = qualitative
        pvals = []
        for c in qualitative:
            samples = []
            for cls in frame[c].unique():
                s = frame[frame[c] == cls]['SalePrice'].values
                samples.append(s)
            pval = stats.f_oneway(*samples)[1]
            pvals.append(pval)
        anv['pval'] = pvals
        return anv.sort_values('pval')
    
    
    
    a = anova(train)
    a['disparity'] = np.log(1./a['pval'].values)
    sns.barplot(data=a, x='feature', y='disparity')
    x=plt.xticks(rotation=90)
    
    /Users/zhuanxu/anaconda/envs/linear_regression_demo/lib/python3.6/site-packages/scipy/stats/stats.py:2958: RuntimeWarning: invalid value encountered in double_scalars
      ssbn += _square_of_sums(a - offset) / float(len(a))
    
    [Figure: disparity, i.e. log(1/pval), per qualitative feature]

    Here stats.f_oneway measures how much each qualitative variable matters for SalePrice: if SalePrice has roughly the same mean in every category of, say, GarageType, the variable tells us little about SalePrice, and f_oneway returns pval > 0.05, which essentially means the group samples look alike; see the scipy.stats.f_oneway documentation for details.
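    A tiny illustration of f_oneway's behavior (made-up numbers, not the competition data): groups with similar means give a large p-value, while well-separated groups give a small one.

    group_a = [100, 102, 98, 101]
    group_b = [99, 103, 97, 100]
    print(stats.f_oneway(group_a, group_b)[1])   # large p-value: groups look alike

    group_low  = [100, 102, 98, 101]
    group_high = [200, 205, 195, 199]
    print(stats.f_oneway(group_low, group_high)[1])  # tiny p-value: groups clearly differ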

    Next we process these qualitative variables, numerically encoding them (ordered by mean SalePrice) so each becomes an ordinal column:

    def encode(frame, feature):
        ordering = pd.DataFrame()
        ordering['val'] = frame[feature].unique()
        ordering.index = ordering.val
        ordering['spmean'] = frame[[feature, 'SalePrice']].groupby(feature).mean()['SalePrice']
        ordering = ordering.sort_values('spmean')
        ordering['ordering'] = range(1, ordering.shape[0]+1)
        ordering = ordering['ordering'].to_dict()
        
        for cat, o in ordering.items():
            frame.loc[frame[feature] == cat, feature+'_E'] = o
        
    qual_encoded = []
    for q in qualitative:  
        encode(train, q)
        qual_encoded.append(q+'_E')
    print(qual_encoded)
    
    ['MSSubClass_E', 'Street_E', 'LotShape_E', 'LandContour_E', 'LotConfig_E', 'LandSlope_E', 'Neighborhood_E', 'Condition1_E', 'Condition2_E', 'BldgType_E', 'HouseStyle_E', 'RoofStyle_E', 'RoofMatl_E', 'Exterior1st_E', 'Exterior2nd_E', 'ExterQual_E', 'ExterCond_E', 'Foundation_E', 'Heating_E', 'HeatingQC_E', 'CentralAir_E', 'Electrical_E', 'KitchenQual_E', 'PavedDrive_E', 'SaleType_E', 'SaleCondition_E']
    
    # Pick out the rows that still contain missing data and take a look
    missing_data = all_df.isnull().sum()
    missing_data = missing_data[missing_data>0]
    ids = all_df[missing_data.index].isnull()
    # index (0), columns (1)
    all_df.loc[ids[ids.any(axis=1)].index][missing_data.index]
    
    [Table: rows of all_df with missing values in the remaining columns]

    # After encoding, values that were NaN are still NaN:
    train.loc[1379,'Electrical_E']
    
    nan
    

    Computing Correlations

    def spearman(frame, features):
        spr = pd.DataFrame()
        spr['feature'] = features
        #Signature: a.corr(other, method='pearson', min_periods=None)
        #Docstring:
        #Compute correlation with `other` Series, excluding missing values
    # Compute the Spearman correlation between each feature and SalePrice
        spr['spearman'] = [frame[f].corr(frame['SalePrice'], 'spearman') for f in features]
        spr = spr.sort_values('spearman')
        plt.figure(figsize=(6, 0.25*len(features))) # width, height
        sns.barplot(data=spr, y='feature', x='spearman', orient='h')
        
    features = quantitative + qual_encoded
    spearman(train, features)
    
    [Figure: Spearman correlation of each feature with SalePrice]

    From the chart above, features such as OverallQual, Neighborhood, and GrLivArea all have a large influence on the price.

    Next we examine correlations between the feature columns themselves: if two features are strongly correlated, they will cause collinearity problems in a regression. A quick sketch to list such pairs follows; the heatmaps below then show the full picture.
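    As a complement to the heatmaps (my sketch, with 0.8 as an arbitrary threshold), we can list the most correlated quantitative pairs directly:

    corr_q = train[quantitative].corr().abs()
    upper = np.triu(np.ones(corr_q.shape, dtype=bool), k=1)   # upper triangle, no diagonal
    pairs = corr_q.where(upper).stack().sort_values(ascending=False)
    print(pairs[pairs > 0.8])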

    plt.figure(1)
    corr = train[quantitative+['SalePrice']].corr()
    sns.heatmap(corr)
    plt.figure(2)
    corr = train[qual_encoded+['SalePrice']].corr()
    sns.heatmap(corr)
    plt.figure(3)
    # corr matrix shape: (len(quantitative)+1, len(qual_encoded)+1) = (31, 27)
    corr = pd.DataFrame(np.zeros([len(quantitative)+1, len(qual_encoded)+1]), index=quantitative+['SalePrice'], columns=qual_encoded+['SalePrice'])
    for q1 in quantitative+['SalePrice']:
        for q2 in qual_encoded+['SalePrice']:
            corr.loc[q1, q2] = train[q1].corr(train[q2])
    sns.heatmap(corr)
    
    <matplotlib.axes._subplots.AxesSubplot at 0x1172cb860>
    
    [Figures: correlation heatmaps for quantitative features, encoded qualitative features, and the cross-correlations between the two]

    Pairplots

    def pairplot(x, y, **kwargs):
        ax = plt.gca()
        ts = pd.DataFrame({'time': x, 'val': y})
        ts = ts.groupby('time').mean()
        ts.plot(ax=ax)
        plt.xticks(rotation=90)
        
    f = pd.melt(train, id_vars=['SalePrice'], value_vars=quantitative+qual_encoded)
    g = sns.FacetGrid(f, col="variable",  col_wrap=2, sharex=False, sharey=False, size=5)
    g = g.map(pairplot, "value", "SalePrice")
    
    IOPub data rate exceeded.
    The notebook server will temporarily stop sending output
    to the client in order to avoid crashing it.
    To change this limit, set the config variable
    `--NotebookApp.iopub_data_rate_limit`.
    

    From these plots we can clearly see which variables have a good linear relationship with SalePrice and which are nonlinear; for some, adding a quadratic term might bring out a stronger linear correlation.
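    For curves that bend upward, a squared term can help linearize the relationship. A sketch (GrLivArea is an illustrative pick, not a feature the post singles out):

    train['GrLivArea_sq'] = train['GrLivArea'] ** 2
    print(train[['GrLivArea', 'GrLivArea_sq']].corrwith(train['SalePrice']))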

    Price Segmentation

    We make a simple binary split on price and look at how the features differ between the two groups. First, the distribution of SalePrice:

    a = train['SalePrice']
    a.plot.hist()
    
    <matplotlib.axes._subplots.AxesSubplot at 0x11ed529b0>
    
    [Figure: histogram of log SalePrice]

    features = quantitative
    
    standard = train[train['SalePrice'] < np.log(200000)]
    pricey = train[train['SalePrice'] >= np.log(200000)]
    
    diff = pd.DataFrame()
    diff['feature'] = features
    diff['difference'] = [(pricey[f].fillna(0.).mean() - standard[f].fillna(0.).mean())/(standard[f].fillna(0.).mean())
                          for f in features]
    
    sns.barplot(data=diff, x='feature', y='difference')
    x=plt.xticks(rotation=90)
    

    [Figure: relative difference in feature means, expensive vs. standard houses]

    From the chart above, for expensive houses the pool (PoolArea) makes by far the biggest relative difference.

    Clustering

    Let's try a simple clustering of the data: t-SNE for a 2-D embedding, PCA plus KMeans for the cluster labels.

    features = quantitative + qual_encoded
    model = TSNE(n_components=2, random_state=0, perplexity=50)
    X = train[features].fillna(0.).values
    tsne = model.fit_transform(X)
    
    std = StandardScaler()
    s = std.fit_transform(X)
    pca = PCA(n_components=30)
    pca.fit(s)
    pc = pca.transform(s)
    kmeans = KMeans(n_clusters=5)
    kmeans.fit(pc)
    
    fr = pd.DataFrame({'tsne1': tsne[:,0], 'tsne2': tsne[:, 1], 'cluster': kmeans.labels_})
    sns.lmplot(data=fr, x='tsne1', y='tsne2', hue='cluster', fit_reg=False)
    print(np.sum(pca.explained_variance_ratio_))
    
    0.838557886152
    
    [Figure: t-SNE embedding colored by KMeans cluster]

    30 components cover about 83% of the variance. Overall, this clustering does not separate the houses well.
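    To see how many components a given variance level actually needs (a sketch using the standardized matrix s from above):

    pca_full = PCA().fit(s)                        # keep all components this time
    cum = np.cumsum(pca_full.explained_variance_ratio_)
    print(np.argmax(cum >= 0.83) + 1, "components reach 83% of the variance")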

    Summary

    This article walked through some exploratory analysis of the data; the next post will build models on top of this analysis.
