Practical Plan 0430 - 石头's Practice Homework
Exercise requirements
Data source, data processing, and run output:
Scraped data and the data-processing implementation code
__author__ = 'daijielei'
'''
Lesson 19 exercise, an enhanced version of the Lesson 11 one: Lesson 11 scraped a pile of
rental-listing details from Xiaozhu (小猪短租); Lesson 19 also saves that content into a DB.
'''
from bs4 import BeautifulSoup  # parse the HTML documents
import requests                # fetch the HTML pages
import time
import pymongo

client = pymongo.MongoClient('localhost', 27017)
xiaozuDB = client['xiaozuDB']
sheet1 = xiaozuDB['sheet1']
'''
saveToDB
Write one scraped record into mongoDB.
'''
def saveToDB(data):
    sheet1.insert_one(data)
    # Do not close the shared client here: it is reused for every insert during a crawl.
'''
showxiaozuDB_payMore500
Inspect the data stored in mongoDB and print the records priced at 500 or more.
'''
def showxiaozuDB_payMore500():
    print(sheet1.count_documents({}))  # total number of stored records
    for item in sheet1.find({'pay': {'$gte': 500}}):  # filter: keep records with pay >= 500
        print(item)
    client.close()
'''
cleanxiaozuDB
Delete all records from mongoDB.
'''
def cleanxiaozuDB():
    sheet1.delete_many({})
    print(sheet1.count_documents({}))
    client.close()
'''
##################################################################################################################
########                                                                                                  ########
    The code below was written for Lesson 11: scrape the listing information from Xiaozhu (小猪短租).
########                                                                                                  ########
##################################################################################################################
'''
# urls (built in startCatch) sets the crawl range; the site serves pages without this header,
# but a browser-like User-Agent is passed along anyway
header = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.110 Safari/537.36',
    'Host': 'bj.xiaozhu.com'
}
# getlist collects the detail-page URL of every listing on a list page
# test URL = "http://bj.xiaozhu.com/search-duanzufang-p1-0/"
def getlist(url):
    webData = requests.get(url, headers=header)
    soup = BeautifulSoup(webData.text, 'lxml')
    houselists = soup.select('ul > li > a')
    for houselist in houselists:
        listUrl = str(houselist.get('href'))
        if listUrl.find('http://bj.xiaozhu.com/fangzi/') != -1:  # the link points at a detail page, so scrape it
            #print(listUrl)
            getInfoPage(listUrl)
# getInfoPage scrapes the details of one listing: title, address, price, house photo, owner photo, etc.
# test URL = "http://bj.xiaozhu.com/fangzi/525041101.html"
def getInfoPage(url):
    webData = requests.get(url, headers=header)
    soup = BeautifulSoup(webData.text, 'lxml')
    time.sleep(2)  # throttle the crawl
    titles = soup.select('body > div.wrap.clearfix.con_bg > div.con_l > div.pho_info > h4 > em')
    address = soup.select('body > div.wrap.clearfix.con_bg > div.con_l > div.pho_info > p > span.pr5')
    pays = soup.select('#pricePart > div.day_l > span')
    houseimages = soup.select('#curBigImage')
    ownerimages = soup.select('#floatRightBox > div.js_box.clearfix > div.member_pic > a > img')
    ownernames = soup.select('#floatRightBox > div.js_box.clearfix > div.w_240 > h6 > a')
    ownerSexs = soup.select('#floatRightBox > div.js_box.clearfix > div.w_240 > h6 > span')
    for title, addres, pay, houseimage, ownerimage, ownername, ownerSex in zip(titles, address, pays, houseimages, ownerimages, ownernames, ownerSexs):
        data = {
            'title': title.get_text(),
            'addres': addres.get_text(),
            'pay': int(pay.get_text()) if pay.get_text().isdigit() else 0,  # non-numeric prices fall back to 0
            'houseimage': houseimage.get('src'),
            'ownerimage': ownerimage.get('src'),
            'ownername': ownername.get_text(),
            'ownerSex': 'female'
        }
        if 'member_boy_ico' in str(ownerSex.get('class')):  # male owners carry this CSS class; default is female
            data['ownerSex'] = "male"
        print(data)
        #saveToDB(data)
def startCatch():
    urls = ["http://bj.xiaozhu.com/search-duanzufang-p{}-0/".format(i) for i in range(1, 5)]
    for url in urls:
        getlist(url)

# entry point; this block only runs when the script is executed directly
if __name__ == '__main__':
    #startCatch()
    showxiaozuDB_payMore500()
    #cleanxiaozuDB()
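The commented-out calls in the entry point work as a switch between tasks. One workable order (a suggestion, not spelled out in the original post): first run with startCatch() enabled and saveToDB(data) uncommented inside getInfoPage to crawl and store; then re-run with only showxiaozuDB_payMore500() enabled to inspect the result; cleanxiaozuDB() resets the collection between experiments. The first-run entry point would look like:

if __name__ == '__main__':
    startCatch()                # crawl list pages 1-4 and store every listing via saveToDB
    #showxiaozuDB_payMore500()
    #cleanxiaozuDB()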
Notes, summary, thoughts:
1. The key new point in this exercise is saving the data into a database:
    1. Create a connection to the local mongoDB instance
        client = pymongo.MongoClient('localhost', 27017)
    2. Create (or open) the database named 'xiaozuDB'
        xiaozuDB = client['xiaozuDB']
    3. Create the collection named 'sheet1' (a collection is similar to a table, but far more flexible)
        sheet1 = xiaozuDB['sheet1']
    4. Write data into 'sheet1'; the dict is inserted as-is
        sheet1.insert_one(data)
2. Filter out the records priced at 500 or more (a few more query variants are sketched after this list):
    1. Query 'sheet1': find() returns every record matching the condition, which is passed in as a dict
        for item in sheet1.find({'pay': {'$gte': 500}}):  # filter: keep records with pay >= 500
            print(item)
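As a side note, $gte is only one of MongoDB's query operators. A minimal sketch of a few more queries against the same sheet1 collection (the pay and ownerSex field names come from the data dict above):

for item in sheet1.find({'pay': {'$lt': 300}}):  # pay below 300
    print(item)
for item in sheet1.find({'ownerSex': 'male', 'pay': {'$gte': 500}}):  # AND of two conditions
    print(item)
for item in sheet1.find().sort('pay', pymongo.DESCENDING).limit(10):  # ten highest-priced records
    print(item)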