1. Classification and Regression
A quantitative output is called regression, i.e. continuous-variable prediction. Predicting tomorrow's temperature is a regression task.
A qualitative output is called classification, i.e. discrete-variable prediction. Predicting whether tomorrow will be cloudy, sunny, or rainy is a classification task.
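As a minimal illustration of the difference (the toy data and the sklearn classes here are my own example, not part of the original notes), the same nearest-neighbor idea handles both tasks: a regressor predicts a number, a classifier predicts a label.
from sklearn.neighbors import KNeighborsRegressor, KNeighborsClassifier
X = [[1], [2], [3], [4]]  # a single toy feature
reg = KNeighborsRegressor(n_neighbors=3).fit(X, [10.0, 12.0, 20.0, 22.0])
print(reg.predict([[2.5]]))  # regression: a continuous value, e.g. a temperature
clf = KNeighborsClassifier(n_neighbors=3).fit(X, ['sunny', 'sunny', 'rainy', 'rainy'])
print(clf.predict([[2.5]]))  # classification: a discrete label, e.g. tomorrow's weather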
2. Machine Learning - K-Nearest Neighbors
Listing price prediction task
Data loading
import pandas as pd
features = ['accommodates','bedrooms','bathrooms','beds','price','minimum_nights','maximum_nights','number_of_reviews']
dc_listings = pd.read_csv('listings.csv')
dc_listings = dc_listings[features]
print(dc_listings.shape)
dc_listings.head()
Data features:
- accommodates: number of guests the listing can accommodate (treated here as a rough proxy for the number of rooms)
- bedrooms: number of bedrooms
- bathrooms: number of bathrooms
- beds: number of beds
- price: price per night
- minimum_nights: minimum number of nights a guest must book
- maximum_nights: maximum number of nights a guest may book
- number_of_reviews: number of reviews
I have a place with 3 bedrooms. How much should I charge for it?
If we don't know, let's look at what other 3-bedroom listings charge!
How K-nearest neighbors works
Suppose our data source contains only 5 records, and we want to set a price for our own place (which has only one room).
Combining the three most similar listings then gives a rough idea of what our place is worth!
import numpy as np
our_acc_value = 3
dc_listings['distance'] = np.abs(dc_listings.accommodates - our_acc_value)
# np.abs computes the absolute value
dc_listings.distance.value_counts().sort_index()
# value_counts() counts how many listings fall at each distance; sort_index() sorts by the index (here the distance values)
dc_listings.head()
dc_listings.accommodates[:5]     # attribute access and
dc_listings['accommodates'][:5]  # bracket access select the same column
Here we used only the absolute difference as the distance; there are 461 listings at distance 0 (i.e. with the same accommodates value as ours).
The sample operation gives us a shuffled copy of the data.
dc_listings = dc_listings.sample(frac=1,random_state=0)
# sample(frac=1, random_state=0) shuffles the data: frac=1 keeps 100% of the samples, random_state fixes the random seed
dc_listings = dc_listings.sort_values('distance')  # sort the samples by distance in ascending order
print(dc_listings.price.head())  # prices of the 5 closest listings
dc_listings.head()
The problem now is that these price values are strings, so we need to convert them!
dc_listings['price'] = dc_listings.price.str.replace(r'[\$,]', '', regex=True).astype(float)
# str.replace() strips the dollar sign and thousands separators (in the regex, \$ is an escaped dollar sign); astype() converts the strings to float
mean_price = dc_listings.price.iloc[:5].mean()  # average price of the 5 nearest listings
mean_price
We now have the average price, which is roughly what our place should be worth.
Model evaluation
Training set and test set
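The code below uses train_df and test_df, but the split itself is not shown in these notes. A minimal sketch of one possible split (the re-shuffle and the 75/25 proportion are assumptions, not from the original):
dc_listings = dc_listings.sample(frac=1, random_state=0)  # re-shuffle, since the data was just sorted by distance
split_point = int(len(dc_listings) * 0.75)                # assumed 75/25 train/test proportion
train_df = dc_listings.iloc[:split_point].copy()
test_df = dc_listings.iloc[split_point:].copy()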
Considering only one variable
def predict_price(new_listing_value, feature_column):
    # new_listing_value: the feature value of the listing we want to price
    temp_df = train_df.copy()  # work on a copy so the training set is not modified in place
    temp_df['distance'] = np.abs(temp_df[feature_column] - new_listing_value)  # np.abs: absolute difference
    temp_df = temp_df.sort_values('distance')
    knn_5 = temp_df.price.iloc[:5]  # prices of the 5 nearest neighbors
    predicted_price = knn_5.mean()
    return predicted_price
print(test_df.accommodates.head())  # accommodates values of the first five test samples
print(predict_price(1, feature_column='accommodates'))  # predicted price for accommodates=1
print(test_df.head(1).price)  # actual price of the first test sample
test_df['predicted_price'] = test_df.accommodates.apply(predict_price, feature_column='accommodates')
# Series.apply() has no axis argument; it passes each value to predict_price (extra keyword arguments are forwarded)
print(test_df[['predicted_price','price']])
Error evaluation
Root mean squared error (RMSE)
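For reference, the formula the code below implements (the standard RMSE definition; the original figure is not available here):
$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)^2}$
where $\hat{y}_i$ is the predicted price and $y_i$ the actual price of the $i$-th test sample.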
Total RMSE on the test set
test_df['squared_error'] = (test_df['predicted_price'] - test_df['price'])**(2)
mse = test_df['squared_error'].mean()
rmse = mse ** (1/2)
rmse  # now we have an evaluation score for this single-variable model
Would different variables perform differently?
for feature in ['accommodates','bedrooms','bathrooms','number_of_reviews']:
    test_df['predicted_price'] = test_df[feature].apply(predict_price, feature_column=feature)
    test_df['squared_error'] = (test_df['predicted_price'] - test_df['price'])**(2)
    mse = test_df['squared_error'].mean()
    rmse = mse ** (1/2)
    print("RMSE for the {} column: {}".format(feature, rmse))
Data preprocessing
import pandas as pd
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
features = ['accommodates','bedrooms','bathrooms','beds','price','minimum_nights','maximum_nights','number_of_reviews']
dc_listings = pd.read_csv('listings.csv')
dc_listings = dc_listings[features]
dc_listings['price'] = dc_listings.price.str.replace(r'[\$,]', '', regex=True).astype(float)
dc_listings = dc_listings.dropna()  # drop rows with missing values
#dc_listings[features] = StandardScaler().fit_transform(dc_listings[features])  # standardization with sklearn.preprocessing.StandardScaler()
dc_listings[features] = MinMaxScaler().fit_transform(dc_listings[features])  # normalization with sklearn.preprocessing.MinMaxScaler()
normalized_listings = dc_listings
print(dc_listings.shape)
normalized_listings.head()
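For reference (standard definitions, not taken from the original notes): MinMaxScaler maps each feature into $[0, 1]$ via $x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$, while the commented-out StandardScaler alternative rescales each feature to zero mean and unit variance via $x' = \frac{x - \mu}{\sigma}$.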
Using sklearn for KNN
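The sklearn code below assumes the normalized data has already been split into norm_train_df and norm_test_df. A minimal sketch of such a split (the shuffle and the 75/25 proportion are assumptions, not from the original notes):
normalized_listings = normalized_listings.sample(frac=1, random_state=0)  # shuffle before splitting
split_point = int(len(normalized_listings) * 0.75)
norm_train_df = normalized_listings.iloc[:split_point].copy()
norm_test_df = normalized_listings.iloc[split_point:].copy()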
import sklearn
from sklearn.neighbors import KNeighborsRegressor
cols = ['accommodates','bedrooms']
knn = KNeighborsRegressor(n_neighbors=5)  # n_neighbors=5 is the default: use the 5 most similar samples
knn.fit(norm_train_df[cols], norm_train_df['price'])  # fit on the training features and the price labels
two_features_predictions = knn.predict(norm_test_df[cols])
#print(two_features_predictions)
from sklearn.metrics import mean_squared_error
two_features_mse = mean_squared_error(norm_test_df['price'], two_features_predictions)
two_features_rmse = two_features_mse ** (1/2)
print(two_features_rmse)
Output: 0.04193612857354859 (note that the price column was also min-max scaled above, so this RMSE is on the 0-1 scale)
minmax_scaler = MinMaxScaler()
minmax_price_values = minmax_scaler.fit_transform(dc_listings.price.values.reshape(-1,1))
# Note: dc_listings.price was already min-max scaled above, so this scaler is fitted on 0-1 values
minmax_price_r_values = minmax_scaler.inverse_transform(two_features_predictions.reshape(-1,1))
dc_listings.price.values.reshape(-1,1)
Output: array([[0.05334282],
[0.12091038],
[0.01422475],
...,
[0.09423898],
[0.06009957],
[0.03556188]])
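To map these predictions back to dollar amounts, the scaler would have to be fitted on the original, unscaled prices rather than on the already-normalized column. A minimal sketch (re-reading listings.csv for the raw prices is an assumption; the recovered values are approximate because the earlier dropna() removed a slightly different set of rows):
raw_prices = pd.read_csv('listings.csv')['price'].str.replace(r'[\$,]', '', regex=True).astype(float).dropna()
price_scaler = MinMaxScaler().fit(raw_prices.values.reshape(-1, 1))  # fitted on dollar-scale prices
dollar_predictions = price_scaler.inverse_transform(two_features_predictions.reshape(-1, 1))
print(dollar_predictions[:5])  # predictions back on the dollar scale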
Adding more features
knn = KNeighborsRegressor(n_neighbors=5)
cols = ['accommodates','bedrooms','bathrooms','beds','minimum_nights','maximum_nights','number_of_reviews']
knn.fit(norm_train_df[cols], norm_train_df['price'])
seven_features_predictions = knn.predict(norm_test_df[cols])
seven_features_mse = mean_squared_error(norm_test_df['price'], seven_features_predictions)
seven_features_rmse = seven_features_mse ** (1/2)
print(seven_features_rmse)
Score: 0.041388747758587266