0. Basic analysis
This is the Kaggle getting-started competition: predicting which Titanic passengers survived. Using the labelled data in train.csv, predict the outcome for every passenger in test.csv, save the predictions as gender_submission.csv (containing only two columns, PassengerId and Survived), and upload the file to Kaggle.
A quick look at the features shared by the training and test sets:
Feature | Description | Values |
---|---|---|
PassengerId | Unique identifier | |
survival | Survived or not | 0 = No, 1 = Yes |
pclass | Ticket class | 1 = 1st, 2 = 2nd, 3 = 3rd |
sex | Sex | |
Age | Age in years | |
sibsp | # of siblings / spouses aboard | |
parch | # of parents / children aboard | |
ticket | Ticket number | |
fare | Passenger fare | |
cabin | Cabin number | |
embarked | Port of embarkation | C = Cherbourg, Q = Queenstown, S = Southampton |
test.csv contains 418 rows and train.csv contains 891 rows.
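The rest of the walkthrough concatenates the training and test sets for shared preprocessing and splits them apart again before modelling. A minimal sketch of that pattern, using toy stand-in frames rather than the real files (which would be read with pd.read_csv):

```python
import pandas as pd

# Toy stand-ins for train.csv / test.csv (hypothetical values, same layout idea).
train = pd.DataFrame({'PassengerId': [1, 2, 3],
                      'Survived': [0, 1, 1],
                      'Age': [22.0, None, 26.0]})
test = pd.DataFrame({'PassengerId': [4, 5],
                     'Age': [35.0, None]})

# Combine for shared preprocessing; test rows get Survived = NaN automatically.
data = pd.concat([train, test], sort=False).reset_index(drop=True)
print(len(data))                        # 5 rows in total
print(data['Survived'].isnull().sum())  # 2 NaNs, one per test row
```

Because test.csv has no Survived column, the NaN count of Survived in the combined frame is exactly the number of test rows, which is how the two sets are separated again later.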
1 Feature engineering
1.1 Missing-value handling
After concatenating the training and test sets, check how much is missing:
data = pd.concat([train,test])
data = data.drop('PassengerId',axis=1)
missing_data = data.isnull().sum().sort_values(ascending=False)
The missing counts:
Cabin 1014
Survived 418
Age 263
Embarked 2
Fare 1
Ticket 0
SibSp 0
Sex 0
Pclass 0
Parch 0
Name 0
dtype: int64
Each feature gets its own treatment. Survived is missing only for the test rows, so it needs no filling.
Age: mean
Embarked: mode
Fare: median
Cabin: doesn't seem useful as-is, so just drop this feature
# fill with the mode
column_mode = ['Embarked']
for column in column_mode:
    mode_val = data[column].mode()[0]
    data[column].fillna(mode_val, inplace=True)
# fill with the mean
column_avg = ['Age']
for column in column_avg:
    mean_val = data[column].mean()
    data[column].fillna(mean_val, inplace=True)
# fill with the median
column_median = ['Fare']
for column in column_median:
    median_val = data[column].median()
    data[column].fillna(median_val, inplace=True)
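A quick self-contained check, on toy values rather than the real data, that the three fill strategies above leave no gaps behind:

```python
import pandas as pd

# Toy frame with the same kinds of gaps as the real data (hypothetical values).
df = pd.DataFrame({'Age': [22.0, None, 40.0],
                   'Embarked': ['S', 'S', None],
                   'Fare': [7.25, None, 71.28]})

df['Age'] = df['Age'].fillna(df['Age'].mean())                    # mean of 22 and 40 -> 31.0
df['Embarked'] = df['Embarked'].fillna(df['Embarked'].mode()[0])  # mode -> 'S'
df['Fare'] = df['Fare'].fillna(df['Fare'].median())               # median of 7.25 and 71.28 -> 39.265

assert df.isnull().sum().sum() == 0  # no missing values remain
```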
1.2 Feature derivation
- Add a relatives-count feature: the sum of SibSp and Parch
- Add a Cabin-derived feature, cabin_exist: True if Cabin is non-null, otherwise False
fig = plt.figure()
fig.set(alpha=0.2)
survived_nocabin = train.Survived[train.Cabin.isnull()].value_counts()
survived_cabin = train.Survived[train.Cabin.notnull()].value_counts()
df = pd.DataFrame({'has cabin': survived_cabin, 'no cabin': survived_nocabin}).T
df.plot(kind='bar', stacked=True)
plt.title('Survival by presence of a Cabin value')
plt.xlabel('Cabin present?')
plt.ylabel('Count')
plt.show()
Survival broken down by whether Cabin has a value
- Combine the discrete features pairwise
data['relative'] = data.apply(lambda x: int(x['SibSp']) + int(x['Parch']), axis=1)
data['cabin_exist'] = data['Cabin'].notnull()
# pairwise combinations of the discrete features
columns = ['Embarked', 'Pclass', 'Sex', 'cabin_exist']
total = len(columns)
for index1 in range(total):
    for index2 in range(index1 + 1, total):
        print("{}_{}".format(columns[index1], columns[index2]))
        data["{}_{}".format(columns[index1], columns[index2])] = data.apply(
            lambda x: "{}_{}".format(x[columns[index1]], x[columns[index2]]), axis=1)
1.3 Feature binning
- Age binning
# bin Age
bins = [0, 18, 60, 100]
data['age_area'] = pd.cut(data['Age'], bins, labels=['child', 'adult', 'old'])
- Fare binning
# bin Fare
bins = [-1, 100, 300, 600]
data['fare_cut'] = pd.cut(data['Fare'], bins, labels=['one', 'two', 'three'])
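Worth noting: pd.cut uses right-closed intervals by default, so with bins=[0, 18, 60, 100] an 18-year-old lands in 'child' and a 60-year-old in 'adult', while an Age of exactly 0 would fall outside every bin and become NaN. A small check:

```python
import pandas as pd

# Boundary values around the bin edges used above.
ages = pd.Series([0.42, 18, 19, 60, 61])
bins = [0, 18, 60, 100]
labels = pd.cut(ages, bins, labels=['child', 'adult', 'old'])
print(list(labels))  # ['child', 'child', 'adult', 'adult', 'old']
```

This is also why the Fare bins start at -1 rather than 0: fares of exactly 0 exist in the data, and a left edge of 0 would turn them into NaN.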
1.4 Feature transformation
- LabelEncoder encoding
from sklearn.preprocessing import LabelEncoder
column_label = ['Embarked', 'Sex', 'cabin_exist', 'age_area', 'fare_cut', 'Embarked_Pclass', 'Embarked_Sex', 'Embarked_cabin_exist', 'Pclass_Sex', 'Pclass_cabin_exist', 'Sex_cabin_exist']
le = LabelEncoder()
for col in column_label:
    data[col] = le.fit_transform(data[col])
- One-hot encoding
column_dummies = ['Embarked', 'Sex', 'cabin_exist', 'age_area', 'fare_cut', 'Pclass', 'Embarked_Pclass', 'Embarked_Sex', 'Embarked_cabin_exist', 'Pclass_Sex', 'Pclass_cabin_exist', 'Sex_cabin_exist']
data = pd.get_dummies(data, columns=column_dummies)
- Normalization (min-max scaling to [0, 1])
column_sc = ['Fare', 'Age', 'relative', 'Parch', 'SibSp']
for column in column_sc:
    MAX = data[column].max()
    MIN = data[column].min()
    d = data[column].apply(lambda x: (x - MIN) / (MAX - MIN))
    data = data.drop(column, axis=1)
    data[column] = d
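The hand-rolled min-max loop above also has the side effect of moving each scaled column to the end of the frame (drop, then re-assign). scikit-learn's MinMaxScaler does the same scaling in place; a sketch on toy values:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Toy frame (hypothetical values); the real columns are Fare, Age, relative, Parch, SibSp.
df = pd.DataFrame({'Fare': [0.0, 50.0, 100.0], 'Age': [10.0, 20.0, 30.0]})

scaler = MinMaxScaler()
df[['Fare', 'Age']] = scaler.fit_transform(df[['Fare', 'Age']])
print(df['Fare'].tolist())  # [0.0, 0.5, 1.0]
```

Strictly speaking, because train and test are combined here, the scaler sees the test distribution too; fitting on the training rows only would be the cleaner choice.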
That covers just about everything that came to mind, so let's move on to prediction.
2 Model prediction
First, split the combined dataset back into its training and test parts:
# split the combined data back into train and test
data_test = data[data['Survived'].isnull()]
data_test = data_test.drop(['Survived'], axis=1)
data_train = data[data['Survived'].notnull()]
Then split the training set into a training part and a validation part:
# split the training set into training and validation sets
from sklearn.model_selection import train_test_split
data_train_y = data_train['Survived']
data_train_X = data_train.drop('Survived', axis=1)
X_train, X_test, y_train, y_test = train_test_split(data_train_X, data_train_y, test_size=0.2, random_state=0)
Finally, choose a model. For a binary classification problem, LogisticRegression is the natural first choice.
# train a LogisticRegression model
from sklearn.linear_model import LogisticRegression
random_state = 2019
lr = LogisticRegression(random_state=random_state)
lr.fit(X_train,y_train)
Check the trained model's accuracy on the validation set:
# predict on the validation set and score it with accuracy_score
from sklearn.metrics import accuracy_score
data_test_hat = lr.predict(X_test)
score = accuracy_score(y_test,data_test_hat)
Validation accuracy: 0.8044692737430168
Now predict the final result:
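A single 80/20 split gives a somewhat noisy accuracy estimate; cross-validation averages over several splits. A sketch on synthetic data (for the real run you would pass data_train_X and data_train_y instead of the generated X, y):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the Titanic features, same row count as train.csv.
X, y = make_classification(n_samples=891, n_features=20, random_state=2019)

# 5-fold cross-validation: five train/validation splits, five accuracy scores.
scores = cross_val_score(LogisticRegression(random_state=2019, max_iter=1000), X, y, cv=5)
print(scores.mean())  # mean accuracy over the 5 folds
```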
data_predict = lr.predict(data_test)
and assemble the predictions into the submission file:
# PassengerId was dropped from data earlier, so take it from the original test frame
submission = pd.DataFrame({'PassengerId': test['PassengerId'].values, 'Survived': data_predict}, dtype=np.int64)
submission.to_csv('result/01.Titanic Machine Learning from Disaster/gender_submission.csv', index=False)
3. Uploading the results
Submitted it happily, only to find the score was 0.78468, ranked somewhere past 4000th. Out of curiosity I checked the top team: accuracy 1.0, every single prediction correct. It is not as impressive as it sounds, though. As I learned from "How to get a 1.000", the true outcome for each test passenger can be looked up on the Titanic Survivors website. ,,ԾㅂԾ,,
Time to keep working and improving!
References:
1. How to get a 1.000
2. Titanic Survivors
3. Learning from the disaster: 99% Accuracy
4. Titanic: Machine Learning from Disaster