Notes on Python for Data Analysis, Chapter 7: Concatenating Along an Axis and Data Transformation


Author: 龍猫君 | Published 2017-12-21 00:02

    Concatenating Along an Axis

    Another kind of data combination operation is referred to interchangeably as concatenation, binding, or stacking. NumPy has a concatenate function for doing this with raw NumPy arrays:

    In [2]: import pandas as pd
    
    In [3]: import numpy as np
    
    In [4]: arr=np.arange(12).reshape((3,4))
    
    In [5]: arr
    Out[5]: 
    array([[ 0, 1, 2, 3],
    [ 4, 5, 6, 7],
    [ 8, 9, 10, 11]])
    
    In [6]: np.concatenate([arr,arr],axis=1)
    Out[6]: 
    array([[ 0, 1, 2, 3, 0, 1, 2, 3],
    [ 4, 5, 6, 7, 4, 5, 6, 7],
    [ 8, 9, 10, 11, 8, 9, 10, 11]])
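    For contrast, concatenating with axis=0 (the default) stacks the arrays vertically instead. A minimal sketch:

```python
import numpy as np

arr = np.arange(12).reshape((3, 4))

# axis=0 stacks row-wise: the result has twice as many rows
stacked = np.concatenate([arr, arr], axis=0)
print(stacked.shape)  # (6, 4)
```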
    

    In the context of pandas objects such as Series and DataFrame, having labeled axes enables you to further generalize array concatenation. In particular, you have a number of additional things to think about:
    If the objects are indexed differently on the other axes, should the collection of axes be unioned or intersected?
    Do the concatenated chunks need to be identifiable in the resulting object?
    Does the concatenation axis matter at all?

    Suppose we have three Series with no overlapping indexes (these examples assume that from pandas import Series, DataFrame has been run, following the book's convention):

    In [7]: s1=Series([0,1],index=['a','b'])
    
    In [8]: s2 = Series([2, 3, 4], index=['c', 'd', 'e'])
    
    In [9]: s3 = Series([5, 6], index=['f', 'g'])
    

    Calling concat with these objects in a list glues together the values and indexes:

    In [11]: pd.concat([s1, s2, s3])
    Out[11]: 
    a 0
    b 1
    c 2
    d 3
    e 4
    f 5
    g 6
    dtype: int64
    

    By default, concat works along axis=0, producing another Series. If you pass axis=1, the result will instead be a DataFrame (axis=1 is the columns):

    In [12]: pd.concat([s1, s2, s3],axis=1)
    Out[12]: 
    0 1 2
    a 0.0 NaN NaN
    b 1.0 NaN NaN
    c NaN 2.0 NaN
    d NaN 3.0 NaN
    e NaN 4.0 NaN
    f NaN NaN 5.0
    g NaN NaN 6.0
    

    In this case there is no overlap on the other axis, which as you can see is the sorted union (the "outer" join) of the indexes. You can intersect them instead by passing join='inner':

    In [13]: s4=pd.concat([s1*5,s3])
    
    In [14]: s4
    Out[14]: 
    a 0
    b 5
    f 5
    g 6
    dtype: int64
    
    In [15]: pd.concat([s1,s4],axis=1)
    Out[15]: 
    0 1
    a 0.0 0
    b 1.0 5
    f NaN 5
    g NaN 6
    
    In [16]: pd.concat([s1, s4], axis=1, join='inner')
    Out[16]: 
    0 1
    a 0 0
    b 1 5
    

    You can even specify the indexes to be used on the other axes with join_axes:

    In [17]: pd.concat([s1, s4], axis=1, join_axes=[['a', 'c', 'b', 'e']])
    Out[17]: 
    0 1
    a 0.0 0.0
    c NaN NaN
    b 1.0 5.0
    e NaN NaN
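    Note that join_axes was deprecated and later removed from pandas (1.0+). In current versions the same result can be obtained by concatenating first and then reindexing; a sketch under that assumption:

```python
import pandas as pd

s1 = pd.Series([0, 1], index=['a', 'b'])
s4 = pd.concat([s1 * 5, pd.Series([5, 6], index=['f', 'g'])])

# reindex the row axis afterward, equivalent to the old
# join_axes=[['a', 'c', 'b', 'e']]
result = pd.concat([s1, s4], axis=1).reindex(['a', 'c', 'b', 'e'])
```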
    

    One issue is that the concatenated pieces are not identifiable in the result. Suppose instead you wanted to create a hierarchical index on the concatenation axis. To do this, use the keys argument:

    In [18]: result = pd.concat([s1, s1, s3], keys=['one', 'two', 'three'])
    
    In [19]: result
    Out[19]: 
    one a 0
    b 1
    two a 0
    b 1
    three f 5
    g 6
    dtype: int64
    

    Unstacking turns the rows into columns here:

    In [20]: result.unstack()
    Out[20]: 
    a b f g
    one 0.0 1.0 NaN NaN
    two 0.0 1.0 NaN NaN
    three NaN NaN 5.0 6.0
    

    In the case of combining Series along axis=1, the keys become the DataFrame column headers:

    In [21]: pd.concat([s1, s2, s3], axis=1, keys=['one', 'two', 'three'])
    Out[21]: 
    one two three
    a 0.0 NaN NaN
    b 1.0 NaN NaN
    c NaN 2.0 NaN
    d NaN 3.0 NaN
    e NaN 4.0 NaN
    f NaN NaN 5.0
    g NaN NaN 6.0
    

    The same logic extends to DataFrame objects:

    In [22]: df1 = DataFrame(np.arange(6).reshape(3, 2), index=['a', 'b', 'c'],
        ...: columns=['one', 'two'])
    
    In [23]: df2 = DataFrame(5 + np.arange(4).reshape(2, 2), index=['a', 'c'],
        ...: columns=['three', 'four'])
    
    In [24]: df1
    Out[24]: 
    one two
    a 0 1
    b 2 3
    c 4 5
    
    In [25]: df2
    Out[25]: 
    three four
    a 5 6
    c 7 8
    
    In [26]: pd.concat([df1,df2])
    Out[26]: 
    four one three two
    a NaN 0.0 NaN 1.0
    b NaN 2.0 NaN 3.0
    c NaN 4.0 NaN 5.0
    a 6.0 NaN 5.0 NaN
    c 8.0 NaN 7.0 NaN
    
    In [28]: pd.concat([df1,df2],keys=['level1','level2'])
    Out[28]: 
    four one three two
    level1 a NaN 0.0 NaN 1.0
    b NaN 2.0 NaN 3.0
    c NaN 4.0 NaN 5.0
    level2 a 6.0 NaN 5.0 NaN
    c 8.0 NaN 7.0 NaN
    
    In [29]: pd.concat([df1, df2], axis=1, keys=['level1', 'level2'])
    Out[29]: 
      level1     level2     
         one two  three four
    a      0   1    5.0  6.0
    b      2   3    NaN  NaN
    c      4   5    7.0  8.0
    

    If you pass a dict of objects instead of a list, the dict's keys will be used for the keys option:

    In [30]: pd.concat({'level1': df1, 'level2': df2}, axis=1)
    Out[30]: 
      level1     level2     
         one two  three four
    a      0   1    5.0  6.0
    b      2   3    NaN  NaN
    c      4   5    7.0  8.0
    

    There are a couple of additional arguments governing how the hierarchical index is created (the book lists them in a table of concat function arguments); for example, you can name the created axis levels with the names argument:

    In [31]: pd.concat([df1, df2], axis=1, keys=['level1', 'level2'],
        ...: names=['upper', 'lower'])
    Out[31]: 
    upper level1     level2     
    lower    one two  three four
    a          0   1    5.0  6.0
    b          2   3    NaN  NaN
    c          4   5    7.0  8.0
    

    A last consideration concerns DataFrames in which the row index is not meaningful in the context of the analysis:

    In [32]: df1 = DataFrame(np.random.randn(3, 4), columns=['a', 'b', 'c', 'd'])
    
    In [33]: df2 = DataFrame(np.random.randn(2, 3), columns=['b', 'd', 'a'])
    
    In [34]: df1
    Out[34]: 
    a b c d
    0 1.617624 -1.218221 -0.426647 -1.251856
    1 0.166891 0.723824 -0.528937 -1.023203
    2 -1.687020 -1.998333 -0.112431 -0.231684
    
    In [35]: df2
    Out[35]: 
    b d a
    0 1.145881 -0.585634 1.664464
    1 -1.461537 -0.121653 -0.120717
    
    In [36]: pd.concat([df1, df2])
    Out[36]: 
    a b c d
    0 1.617624 -1.218221 -0.426647 -1.251856
    1 0.166891 0.723824 -0.528937 -1.023203
    2 -1.687020 -1.998333 -0.112431 -0.231684
    0 1.664464 1.145881 NaN -0.585634
    1 -0.120717 -1.461537 NaN -0.121653
    

    In this case, pass ignore_index=True:

    In [37]: pd.concat([df1, df2], ignore_index=True)
    Out[37]: 
    a b c d
    0 1.617624 -1.218221 -0.426647 -1.251856
    1 0.166891 0.723824 -0.528937 -1.023203
    2 -1.687020 -1.998333 -0.112431 -0.231684
    3 1.664464 1.145881 NaN -0.585634
    4 -0.120717 -1.461537 NaN -0.121653
    

    Combining Data with Overlap

    There is another data combination situation that can't be expressed as either a merge or concatenation operation: you may have two datasets whose indexes overlap in full or in part. As a motivating example, consider NumPy's where function, which expresses a vectorized if-else:

    In [38]: a = Series([np.nan, 2.5, np.nan, 3.5, 4.5, np.nan],
        ...: index=['f', 'e', 'd', 'c', 'b', 'a'])
    
    In [39]: b = Series(np.arange(len(a), dtype=np.float64),
        ...: index=['f', 'e', 'd', 'c', 'b', 'a'])
    
    In [40]: a
    Out[40]: 
    f NaN
    e 2.5
    d NaN
    c 3.5
    b 4.5
    a NaN
    dtype: float64
    
    In [41]: b
    Out[41]: 
    f 0.0
    e 1.0
    d 2.0
    c 3.0
    b 4.0
    a 5.0
    dtype: float64
    
    In [42]: b[-1]=np.nan
    
    In [43]: b
    Out[43]: 
    f 0.0
    e 1.0
    d 2.0
    c 3.0
    b 4.0
    a NaN
    dtype: float64
    
    In [44]: np.where(pd.isnull(a), b, a)
    Out[44]: array([ 0. , 2.5, 2. , 3.5, 4.5, nan])
    
    In [45]: b[:2]
    Out[45]: 
    f 0.0
    e 1.0
    dtype: float64
    
    In [46]: b[:-2]
    Out[46]: 
    f 0.0
    e 1.0
    d 2.0
    c 3.0
    dtype: float64
    
    In [47]: a[2:]
    Out[47]: 
    d NaN
    c 3.5
    b 4.5
    a NaN
    dtype: float64
    

    Series has a combine_first method, which performs the equivalent of this operation along with data alignment:

    In [48]: b[:-2].combine_first(a[2:])
    Out[48]: 
    a NaN
    b 4.5
    c 3.0
    d 2.0
    e 1.0
    f 0.0
    dtype: float64
    

    With DataFrames, combine_first naturally does the same thing column by column, so you can think of it as "patching" missing data in the calling object with data from the object you pass:

    In [49]: df1 = DataFrame({'a': [1., np.nan, 5., np.nan],
        ...: 'b': [np.nan, 2., np.nan, 6.],
        ...: 'c': range(2, 18, 4)})
    
    In [50]: df2 = DataFrame({'a': [5, 4, np.nan, 3, 7],
        ...: 'b': [np.nan, 3, 4, 6, 8]})
    
    In [51]: df1
    Out[51]: 
    a b c
    0 1.0 NaN 2
    1 NaN 2.0 6
    2 5.0 NaN 10
    3 NaN 6.0 14
    
    In [52]: df2
    Out[52]: 
    a b
    0 5.0 NaN
    1 4.0 3.0
    2 NaN 4.0
    3 3.0 6.0
    4 7.0 8.0
    
    In [53]: df1.combine_first(df2)
    Out[53]: 
    a b c
    0 1.0 NaN 2.0
    1 4.0 2.0 6.0
    2 5.0 4.0 10.0
    3 3.0 6.0 14.0
    4 7.0 8.0 NaN
    

    Reshaping and Pivoting

    There are a number of fundamental operations for rearranging tabular data. These are alternately referred to as reshape or pivot operations.

    Reshaping with Hierarchical Indexing

    Hierarchical indexing provides a consistent way to rearrange data in a DataFrame. There are two primary actions:
    stack: this "rotates" or pivots from the columns in the data to the rows.
    unstack: this pivots from the rows into the columns.
    Consider a small DataFrame with string arrays as its row and column indexes:

    In [54]: data = DataFrame(np.arange(6).reshape((2, 3)),
        ...: index=pd.Index(['Ohio', 'Colorado'], name='state'),
        ...: columns=pd.Index(['one', 'two', 'three'], name='number'))
    
    In [55]: data
    Out[55]: 
    number one two three
    state 
    Ohio 0 1 2
    Colorado 3 4 5
    

    Using the stack method on this data pivots the columns into the rows, producing a Series:

    In [56]: result=data.stack()
    
    In [57]: result
    Out[57]: 
    state number
    Ohio one 0
    two 1
    three 2
    Colorado one 3
    two 4
    three 5
    dtype: int32
    

    For a Series with a hierarchical index, you can rearrange the data back into a DataFrame with unstack:

    In [58]: result.unstack()
    Out[58]: 
    number one two three
    state 
    Ohio 0 1 2
    Colorado 3 4 5
    

    By default, the innermost level is unstacked (same with stack). You can unstack a different level by passing a level number or name:

    In [59]: result.unstack(0)
    Out[59]: 
    state Ohio Colorado
    number 
    one 0 3
    two 1 4
    three 2 5
    
    In [60]: result.unstack(1)
    Out[60]: 
    number one two three
    state 
    Ohio 0 1 2
    Colorado 3 4 5
    
    In [61]: result.unstack('state')
    Out[61]: 
    state Ohio Colorado
    number 
    one 0 3
    two 1 4
    three 2 5
    

    Unstacking might introduce missing data if all of the values in the level aren't found in each of the subgroups:

    In [62]: s1 = Series([0, 1, 2, 3], index=['a', 'b', 'c', 'd'])
    
    In [63]: s2 = Series([4, 5, 6], index=['c', 'd', 'e'])
    
    In [64]: data2 = pd.concat([s1, s2], keys=['one', 'two'])
    
    In [65]: data2
    Out[65]: 
    one a 0
    b 1
    c 2
    d 3
    two c 4
    d 5
    e 6
    dtype: int64
    
    In [66]: data2.unstack()
    Out[66]: 
    a b c d e
    one 0.0 1.0 2.0 3.0 NaN
    two NaN NaN 4.0 5.0 6.0
    

    Stacking filters out missing data by default, so the operation is easily invertible:

    In [67]: data2.unstack().stack()
    Out[67]: 
    one a 0.0
    b 1.0
    c 2.0
    d 3.0
    two c 4.0
    d 5.0
    e 6.0
    dtype: float64

    In [69]: data2.unstack().stack(dropna=False)
    Out[69]: 
    one a 0.0
    b 1.0
    c 2.0
    d 3.0
    e NaN
    two a NaN
    b NaN
    c 4.0
    d 5.0
    e 6.0
    dtype: float64
    

    When unstacking in a DataFrame, the level unstacked becomes the lowest level in the result:

    In [70]: df = DataFrame({'left': result, 'right': result + 5},
        ...: columns=pd.Index(['left', 'right'], name='side'))
    
    In [71]: df
    Out[71]: 
    side left right
    state number 
    Ohio one 0 5
    two 1 6
    three 2 7
    Colorado one 3 8
    two 4 9
    three 5 10
    
    In [72]: df.unstack()
    Out[72]: 
    side      left           right          
    number     one two three   one two three
    state                                   
    Ohio         0   1     2     5   6     7
    Colorado     3   4     5     8   9    10
    
    In [73]: df.unstack('state')
    Out[73]: 
    side    left          right         
    state   Ohio Colorado  Ohio Colorado
    number                              
    one        0        3     5        8
    two        1        4     6        9
    three      2        5     7       10
    
    In [74]: df.unstack('state').stack()
    Out[74]: 
    side left right
    number state 
    one Ohio 0 5
    Colorado 3 8
    two Ohio 1 6
    Colorado 4 9
    three Ohio 2 7
    Colorado 5 10
    
    In [75]: df.unstack('state').stack('side')
    Out[75]: 
    state Colorado Ohio
    number side 
    one left 3 0
    right 8 5
    two left 4 1
    right 9 6
    three left 5 2
    right 10 7
    

    Data Transformation

    So far in this chapter we have been concerned with rearranging data. Filtering, cleaning, and other transformations are another class of important operations.

    Removing Duplicates

    Duplicate rows may be found in a DataFrame for any number of reasons. Here is an example:

    In [83]: data = DataFrame({'k1': ['one'] * 3 + ['two'] * 4,
        ...: 'k2': [1, 1, 2, 3, 3, 4, 4]})
    
    In [84]: data
    Out[84]: 
    k1 k2
    0 one 1
    1 one 1
    2 one 2
    3 two 3
    4 two 3
    5 two 4
    6 two 4
    

    The DataFrame method duplicated returns a boolean Series indicating whether each row is a duplicate:

    In [85]: data.duplicated()
    Out[85]: 
    0 False
    1 True
    2 False
    3 False
    4 True
    5 False
    6 True
    dtype: bool
    

    Relatedly, drop_duplicates returns a DataFrame where the duplicated rows have been removed:

    In [87]: data.drop_duplicates()
    Out[87]: 
    k1 k2
    0 one 1
    2 one 2
    3 two 3
    5 two 4
    

    Both of these methods by default consider all of the columns; alternatively, you can specify any subset of them to detect duplicates. Suppose we had an additional column of values and wanted to filter duplicates based only on the k1 column:

    In [88]: data['v1'] = range(7)
    
    In [89]: data['v1']
    Out[89]: 
    0 0
    1 1
    2 2
    3 3
    4 4
    5 5
    6 6
    Name: v1, dtype: int32
    
    In [90]: data.drop_duplicates(['k1'])
    Out[90]: 
    k1 k2 v1
    0 one 1 0
    3 two 3 3
    
    In [91]: data.drop_duplicates(['k1', 'k2'])
    Out[91]: 
    k1 k2 v1
    0 one 1 0
    2 one 2 2
    3 two 3 3
    5 two 4 5
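    Both methods keep the first observed value combination by default. The book's older take_last=True option has since become the keep parameter in current pandas; a sketch using it:

```python
import pandas as pd

data = pd.DataFrame({'k1': ['one'] * 3 + ['two'] * 4,
                     'k2': [1, 1, 2, 3, 3, 4, 4],
                     'v1': range(7)})

# keep='last' retains the final occurrence of each duplicated row
last = data.drop_duplicates(['k1', 'k2'], keep='last')
```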
    

    Transforming Data Using a Function or Mapping

    For many datasets, you may wish to perform some transformation based on the values in an array, Series, or column in a DataFrame. Consider the following hypothetical data about various kinds of meat:

    In [4]: data=DataFrame({'food':['bacon', 'pulled pork', 'bacon', 'Pastrami',
       ...: 'corned beef', 'Bacon', 'pastrami', 'honey ham','nova lox'],
       ...: 'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})
    
    In [5]: data
    Out[5]: 
    food ounces
    0 bacon 4.0
    1 pulled pork 3.0
    2 bacon 12.0
    3 Pastrami 6.0
    4 corned beef 7.5
    5 Bacon 8.0
    6 pastrami 3.0
    7 honey ham 5.0
    8 nova lox 6.0
    

    Suppose you wanted to add a column indicating the type of animal that each food came from. Let's write down a mapping of each distinct meat type to the kind of animal:

    In [6]: meat_to_animal = {'bacon': 'pig','pulled pork': 'pig','pastrami': 'cow',
       ...: 'corned beef': 'cow','honey ham': 'pig','nova lox': 'salmon'}
    

    The map method on a Series accepts a function or dict-like object containing a mapping, but here we have a small problem in that some of the meats are capitalized and others are not. Thus, we also need to convert each value to lowercase:

    In [7]: data['animal']=data['food'].map(str.lower).map(meat_to_animal)
    In [8]: data
    Out[8]: 
              food  ounces  animal
    0        bacon     4.0     pig
    1  pulled pork     3.0     pig
    2        bacon    12.0     pig
    3     Pastrami     6.0     cow
    4  corned beef     7.5     cow
    5        Bacon     8.0     pig
    6     pastrami     3.0     cow
    7    honey ham     5.0     pig
    8     nova lox     6.0  salmon
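    One caveat: map(str.lower) raises a TypeError if the column contains missing values, since str.lower gets applied to NaN. The vectorized .str.lower() accessor is NaN-safe, so an equivalent sketch is:

```python
import pandas as pd

data = pd.DataFrame({'food': ['bacon', 'Pastrami', 'nova lox'],
                     'ounces': [4, 6, 6]})
meat_to_animal = {'bacon': 'pig', 'pastrami': 'cow', 'nova lox': 'salmon'}

# .str.lower() skips NaN instead of raising, unlike map(str.lower)
data['animal'] = data['food'].str.lower().map(meat_to_animal)
```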
    

    We could also have passed a function that does all the work:

    In [9]: data['food'].map(lambda x: meat_to_animal[x.lower()])
    Out[9]: 
    0 pig
    1 pig
    2 pig
    3 cow
    4 cow
    5 pig
    6 cow
    7 pig
    8 salmon
    Name: food, dtype: object
    

    Using map is a convenient way to perform element-wise transformations and other data cleaning-related operations.

    Replacing Values

    Filling in missing data with the fillna method can be thought of as a special case of more general value replacement. While map, as you saw above, can be used to modify a subset of values in an object, replace provides a simpler and more flexible way to do so:

    In [10]: data=Series([3,-888,6,-888,-1000,5])
    In [11]: data
    Out[11]: 
    0       3
    1    -888
    2       6
    3    -888
    4   -1000
    5       5
    dtype: int64
    

    The -888 values might be sentinel values for missing data. To replace them with NA values that pandas understands, we can use replace, producing a new Series:

    In [12]: data.replace(-888,np.nan)
    Out[12]: 
    0 3.0
    1 NaN
    2 6.0
    3 NaN
    4 -1000.0
    5 5.0
    dtype: float64
    

    If you want to replace multiple values at once, pass a list of the values to replace and then the substitute value:

    In [13]: data.replace([-888,-1000],np.nan)
    Out[13]: 
    0 3.0
    1 NaN
    2 6.0
    3 NaN
    4 NaN
    5 5.0
    dtype: float64
    

    To use a different replacement for each value, pass a list of substitutes:

    In [14]: data.replace([-888,-1000],[np.nan,0])
    Out[14]: 
    0 3.0
    1 NaN
    2 6.0
    3 NaN
    4 0.0
    5 5.0
    dtype: float64
    

    The argument passed can also be a dict:

    In [16]: data.replace({-888:0,-1000:np.nan})
    Out[16]: 
    0 3.0
    1 0.0
    2 6.0
    3 0.0
    4 NaN
    5 5.0
    dtype: float64
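    replace also accepts regex=True, which treats string arguments as regular-expression patterns rather than exact values; a small sketch with made-up data:

```python
import pandas as pd

s = pd.Series(['foo_1', 'bar_2', 'foo_3'])

# with regex=True, the matching part of each string is substituted
cleaned = s.replace(r'^foo_', 'FOO', regex=True)
```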
    

    Renaming Axis Indexes

    Like values in a Series, axis labels can be similarly transformed by a function or mapping of some form to produce new, differently labeled objects. The axes can also be modified in place without creating a new data structure:

    In [17]: data=DataFrame(np.arange(12).reshape((3,4)),
        ...: index=['Ohio', 'Colorado', 'New York'],
        ...: columns=['one', 'two', 'three', 'four'])
    

    Like a Series, the axis indexes have a map method:

    In [18]: data
    Out[18]: 
    one two three four
    Ohio 0 1 2 3
    Colorado 4 5 6 7
    New York 8 9 10 11
    

    You can assign the result to index, modifying the DataFrame in place:

    In [19]: data.index.map(str.upper)
    Out[19]: Index(['OHIO', 'COLORADO', 'NEW YORK'], dtype='object')
    
    In [20]: data.index=data.index.map(str.upper)
    
    In [21]: data
    Out[21]: 
    one two three four
    OHIO 0 1 2 3
    COLORADO 4 5 6 7
    NEW YORK 8 9 10 11
    

    If you want to create a transformed version of a dataset without modifying the original, a useful method is rename:

    In [22]: data.rename(index=str.title,columns=str.upper)
    Out[22]: 
    ONE TWO THREE FOUR
    Ohio 0 1 2 3
    Colorado 4 5 6 7
    New York 8 9 10 11
    

    Notably, rename can be used in conjunction with a dict-like object, providing new values for a subset of the axis labels:

    In [23]: data.rename(index={'OHIO':'INDIANA'},columns={'three':'peekaboo'})
    Out[23]: 
    one two peekaboo four
    INDIANA 0 1 2 3
    COLORADO 4 5 6 7
    NEW YORK 8 9 10 11
    

    rename saves you from the chore of copying the DataFrame manually and assigning to its index and columns attributes. Should you wish to modify a dataset in place, pass inplace=True:

    In [24]: _ = data.rename(index={'OHIO': 'INDIANA'}, inplace=True)
    
    In [25]: data
    Out[25]: 
    one two three four
    INDIANA 0 1 2 3
    COLORADO 4 5 6 7
    NEW YORK 8 9 10 11
    

    Discretization and Binning

    Continuous data is often discretized or otherwise separated into "bins" for analysis. Suppose you have data about a group of people in a study, and you want to group them into discrete age buckets:

    In [26]: ages = [20, 22, 25, 27, 21, 23, 37, 31, 61, 45, 41, 32]
    
    In [27]: bins=[18,25,35,60,100]
    
    In [28]: cats=pd.cut(ages,bins)
    
    In [29]: cats
    Out[29]: 
    [(18, 25], (18, 25], (18, 25], (25, 35], (18, 25], ..., (25, 35], (60, 100], (35, 60], (35, 60], (25, 35]]
    Length: 12
    Categories (4, interval[int64]): [(18, 25] < (25, 35] < (35, 60] < (60, 100]]
    

    The object pandas returns is a special Categorical object. You can treat it like an array of strings indicating the bin name; it also has a labels attribute giving an integer code for each age (deprecated in favor of codes, as the warning below shows):

    In [30]: cats.labels
    C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:1: FutureWarning: 'labels' is deprecated. Use 'codes' instead
    """Entry point for launching an IPython kernel.
    Out[30]: array([0, 0, 0, 1, 0, 0, 2, 1, 3, 2, 2, 1], dtype=int8)
    
    In [31]: cats.codes
    Out[31]: array([0, 0, 0, 1, 0, 0, 2, 1, 3, 2, 2, 1], dtype=int8)
    
    
    In [35]: pd.value_counts(cats)
    Out[35]: 
    (18, 25] 5
    (35, 60] 3
    (25, 35] 3
    (60, 100] 1
    dtype: int64
    

    Consistent with mathematical notation for intervals, a parenthesis means that the side is open, while a square bracket means it is closed (inclusive). Which side is closed can be changed by passing right=False:

    In [36]: pd.cut(ages, [18, 26, 36, 61, 100], right=False)
    Out[36]: 
    [[18, 26), [18, 26), [18, 26), [26, 36), [18, 26), ..., [26, 36), [61, 100), [36, 61), [36, 61), [26, 36)]
    Length: 12
    Categories (4, interval[int64]): [[18, 26) < [26, 36) < [36, 61) < [61, 100)]
    

    You can also pass your own bin names by passing a list or array to the labels option:

    In [37]: group_names = ['Youth', 'YoungAdult', 'MiddleAged', 'Senior']
    
    In [38]: pd.cut(ages, bins, labels=group_names)
    Out[38]: 
    [Youth, Youth, Youth, YoungAdult, Youth, ..., YoungAdult, Senior, MiddleAged, MiddleAged, YoungAdult]
    Length: 12
    Categories (4, object): [MiddleAged < Senior < YoungAdult < Youth]
    

    If you pass cut an integer number of bins instead of explicit bin edges, it will compute equal-length bins based on the minimum and maximum values in the data. In the following example, some uniformly distributed data is chopped into fourths:

    In [39]: data = np.random.rand(20)
    
    In [40]: pd.cut(data, 4, precision=2)
    Out[40]: 
    [(0.032, 0.26], (0.032, 0.26], (0.26, 0.49], (0.73, 0.96], (0.26, 0.49], ..., (0.032, 0.26], (0.49, 0.73], (0.032, 0.26], (0.49, 0.73], (0.49, 0.73]]
    Length: 20
    Categories (4, interval[float64]): [(0.032, 0.26] < (0.26, 0.49] < (0.49, 0.73] < (0.73, 0.96]]
    

    qcut is a closely related function that bins the data based on sample quantiles. Depending on the distribution of the data, using cut will not usually result in each bin having the same number of data points. Since qcut uses sample quantiles instead, you will obtain roughly equal-size bins:

    In [41]: data = np.random.randn(1000) # normally distributed
    
    In [42]: cats = pd.qcut(data, 4) # cut into quartiles
    
    In [43]: cats
    Out[43]: 
    [(-0.674, 0.0129], (0.755, 2.802], (0.0129, 0.755], (0.0129, 0.755], (-3.259, -0.674], ..., (0.0129, 0.755], (-3.259, -0.674], (-0.674, 0.0129], (-3.259, -0.674], (-0.674, 0.0129]]
    Length: 1000
    Categories (4, interval[float64]): [(-3.259, -0.674] < (-0.674, 0.0129] < (0.0129, 0.755] < (0.755, 2.802]]
    
    In [44]: pd.value_counts(cats)
    Out[44]: 
    (0.755, 2.802] 250
    (0.0129, 0.755] 250
    (-0.674, 0.0129] 250
    (-3.259, -0.674] 250
    dtype: int64
    

    Similar to cut, you can pass your own quantiles (numbers between 0 and 1, inclusive):

    In [45]: pd.qcut(data, [0, 0.1, 0.5, 0.9, 1.])
    Out[45]: 
    [(-1.289, 0.0129], (1.296, 2.802], (0.0129, 1.296], (0.0129, 1.296], (-3.259, -1.289], ..., (0.0129, 1.296], (-3.259, -1.289], (-1.289, 0.0129], (-1.289, 0.0129], (-1.289, 0.0129]]
    Length: 1000
    Categories (4, interval[float64]): [(-3.259, -1.289] < (-1.289, 0.0129] < (0.0129, 1.296] < (1.296, 2.802]]
    

    Detecting and Filtering Outliers

    Filtering or transforming outliers is largely a matter of applying array operations. Consider a DataFrame with some normally distributed data:

    In [46]: np.random.seed(12345)
    
    In [47]: data = DataFrame(np.random.randn(1000, 4))
    
    In [48]: data.describe()
    Out[48]: 
    0 1 2 3
    count 1000.000000 1000.000000 1000.000000 1000.000000
    mean -0.067684 0.067924 0.025598 -0.002298
    std 0.998035 0.992106 1.006835 0.996794
    min -3.428254 -3.548824 -3.184377 -3.745356
    25% -0.774890 -0.591841 -0.641675 -0.644144
    50% -0.116401 0.101143 0.002073 -0.013611
    75% 0.616366 0.780282 0.680391 0.654328
    max 3.366626 2.653656 3.260383 3.927528
    

    Suppose you wanted to find values in one of the columns exceeding 3 in absolute value:

    In [49]: col=data[3]
    In [50]: col[np.abs(col)>3]
    Out[50]: 
    97 3.927528
    305 -3.399312
    400 -3.745356
    Name: 3, dtype: float64
    

    To select all rows having a value exceeding 3 or -3, you can use the any method on a boolean DataFrame:

    In [51]: data[(np.abs(data)>3).any(1)]
    Out[51]: 
    0 1 2 3
    5 -0.539741 0.476985 3.248944 -1.021228
    97 -0.774363 0.552936 0.106061 3.927528
    102 -0.655054 -0.565230 3.176873 0.959533
    305 -2.315555 0.457246 -0.025907 -3.399312
    324 0.050188 1.951312 3.260383 0.963301
    400 0.146326 0.508391 -0.196713 -3.745356
    499 -0.293333 -0.242459 -3.056990 1.918403
    523 -3.428254 -0.296336 -0.439938 -0.867165
    586 0.275144 1.179227 -3.184377 1.369891
    808 -0.362528 -3.548824 1.553205 -2.186301
    900 3.366626 -2.372214 0.851010 1.332846
    

    Values can be set based on these criteria. Here is code to cap values outside the interval -3 to 3:

    In [52]: data[np.abs(data)>3]=np.sign(data)*3
    
    In [53]: data.describe()
    Out[53]: 
    0 1 2 3
    count 1000.000000 1000.000000 1000.000000 1000.000000
    mean -0.067623 0.068473 0.025153 -0.002081
    std 0.995485 0.990253 1.003977 0.989736
    min -3.000000 -3.000000 -3.000000 -3.000000
    25% -0.774890 -0.591841 -0.641675 -0.644144
    50% -0.116401 0.101143 0.002073 -0.013611
    75% 0.616366 0.780282 0.680391 0.654328
    max 3.000000 2.653656 3.000000 3.000000
    

    The np.sign ufunc returns an array of 1 and -1 values indicating the sign of each original value.
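    A quick illustration of np.sign on a small array:

```python
import numpy as np

# -1 for negative values, 0 for zero, 1 for positive values
signs = np.sign(np.array([-2.5, 0.0, 3.1]))
```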
    Permutation and Random Sampling

    Permuting (randomly reordering) a Series or the rows in a DataFrame is easy to do using the numpy.random.permutation function. Calling permutation with the length of the axis you want to permute produces an array of integers indicating the new ordering:

    In [54]: df = DataFrame(np.arange(5 * 4).reshape(5, 4))
    
    In [55]: df
    Out[55]: 
    0 1 2 3
    0 0 1 2 3
    1 4 5 6 7
    2 8 9 10 11
    3 12 13 14 15
    4 16 17 18 19
    
    In [56]: sampler = np.random.permutation(5)
    
    In [57]: sampler
    Out[57]: array([1, 0, 2, 3, 4])
    

    That array can then be used in ix-based indexing or with the take function (note that ix has since been removed from pandas; use iloc or take in current versions):

    In [58]: df.take(sampler)
    Out[58]: 
    0 1 2 3
    1 4 5 6 7
    0 0 1 2 3
    2 8 9 10 11
    3 12 13 14 15
    4 16 17 18 19
    

    To select a random subset without replacement, one way is to slice off the first k elements of the array returned by permutation, where k is the desired subset size:

    In [61]: df.take(np.random.permutation(len(df))[:3])
    Out[61]: 
    0 1 2 3
    1 4 5 6 7
    3 12 13 14 15
    4 16 17 18 19
    
    In [62]: df.take(np.random.permutation(len(df)))
    Out[62]: 
    0 1 2 3
    1 4 5 6 7
    3 12 13 14 15
    0 0 1 2 3
    2 8 9 10 11
    4 16 17 18 19
    
    In [63]: df.take(np.random.permutation(len(df))[:3])
    Out[63]: 
    0 1 2 3
    1 4 5 6 7
    0 0 1 2 3
    4 16 17 18 19
    

    To generate a sample with replacement, the fastest way is to use np.random.randint to draw random integers:

    In [64]: bag = np.array([5, 8, -1, 6, 2])
    
    In [65]: bag
    Out[65]: array([ 5, 8, -1, 6, 2])
    
    In [66]: sampler = np.random.randint(0, len(bag), size=15)
    
    In [67]: sampler
    Out[67]: array([3, 0, 4, 1, 1, 2, 3, 0, 1, 2, 2, 3, 2, 1, 2])
    
    In [68]: draws = bag.take(sampler)
    
    In [69]: draws
    Out[69]: array([ 6, 5, 2, 8, 8, -1, 6, 5, 8, -1, -1, 6, -1, 8, -1])
    

    Computing Indicator/Dummy Variables

    Another type of transformation for statistical modeling or machine learning applications is converting a categorical variable into a "dummy" or "indicator" matrix. If a column in a DataFrame has k distinct values, you would derive a matrix or DataFrame with k columns containing all 1s and 0s. pandas has a get_dummies function for doing this:

    In [71]: df = DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'b'],
        ...: 'data1': range(6)})
    
    In [72]: df
    Out[72]: 
    data1 key
    0 0 b
    1 1 b
    2 2 a
    3 3 c
    4 4 a
    5 5 b
    
    In [73]: pd.get_dummies(df['key'])
    Out[73]: 
    a b c
    0 0 1 0
    1 0 1 0
    2 1 0 0
    3 0 0 1
    4 1 0 0
    5 0 1 0
    

    In some cases, you may want to add a prefix to the columns in the indicator DataFrame, which can then be merged with the other data. get_dummies has a prefix argument for doing this:

    In [75]: dummies = pd.get_dummies(df['key'], prefix='key')
    
    In [76]: dummies
    Out[76]: 
    key_a key_b key_c
    0 0 1 0
    1 0 1 0
    2 1 0 0
    3 0 0 1
    4 1 0 0
    5 0 1 0
    
    In [77]: df_with_dummy = df[['data1']].join(dummies)
    
    In [78]: df_with_dummy
    Out[78]: 
    data1 key_a key_b key_c
    0 0 0 1 0
    1 1 0 1 0
    2 2 1 0 0
    3 3 0 0 1
    4 4 1 0 0
    5 5 0 1 0
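    A useful recipe for statistical applications is to combine get_dummies with a discretization function like cut; the data and seed below are my own:

```python
import numpy as np
import pandas as pd

np.random.seed(12345)
values = np.random.rand(10)
bins = [0, 0.2, 0.4, 0.6, 0.8, 1]

# one indicator column per bin interval
dummies = pd.get_dummies(pd.cut(values, bins))
```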
    

    Next up: practicing string manipulation.

Permalink: https://www.haomeiwen.com/subject/ntfowxtx.html