# My First Scraper
---
Today I wrote my first scraper. A few difficulties came up:
1. Development environment: Python 3.5 vs. Python 2.7. Anaconda seems to install packages under 2.7 by default, which meant the latest lxml could not be imported from Python 3.5. **Solution**: use Anaconda to create a separate Python 3.5 environment (with something like `conda create -n py3.5 python=3.5`) and activate it with `source activate py3.5`.
2. Understanding how the HTML is put together: today I scraped Xiaozhu (bj.xiaozhu.com), which can be browsed without logging in, so no cookie needs to be added to the request headers. Applying CSS selectors / XPath still takes a lot of practice, though, and I still don't really understand how to scrape images.
3. Familiarity with Python itself: how to collect the results of page-by-page scraping into a single list, and how to save images to local disk, both call for more practice with the language (see the sketches after this list and after the code below).
4. Still, finishing a first scraper feels great. I hope to keep at it.
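For the image problem in points 2 and 3, here is a minimal sketch of one way to do it with requests: find the `<img>` tags, request each `src` URL, and write the raw bytes to disk. The page URL, the bare `img` selector, and the file names are placeholders of mine, not Xiaozhu's actual structure:

~~~Python
# Minimal sketch: download the images on a page and save them locally.
# page_url and the plain 'img' selector are placeholders; check the real page source.
import os

import requests
from bs4 import BeautifulSoup

page_url = 'http://example.com/some-listing'  # placeholder, not a real listing URL
resp = requests.get(page_url)
soup = BeautifulSoup(resp.text, 'lxml')

os.makedirs('images', exist_ok=True)  # local folder for the downloads
for i, img in enumerate(soup.select('img')):
    src = img.get('src')
    if not src or not src.startswith('http'):
        continue  # this sketch skips relative / inline image sources
    img_bytes = requests.get(src).content  # .content gives raw bytes; .text would corrupt them
    with open(os.path.join('images', '{}.jpg'.format(i)), 'wb') as f:
        f.write(img_bytes)  # binary mode ('wb'), since images are not text
~~~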
---
**Code**:
~~~Python
# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
import requests

# first_url is a single results page kept from initial testing; the loop below
# walks the paginated search results instead.
first_url = 'http://bj.xiaozhu.com/xicheng-305-9999yuan-duanzufang-9/?startDate=2016-07-01&endDate=2016-07-04'
urls = ['http://bj.xiaozhu.com/xicheng-duanzufang-p{}-8/?startDate=2016-07-01&endDate=2016-07-04'.format(i)
        for i in range(1, 9)]
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'
}
data = []

def get_housing(url):
    """Scrape one results page and return a list of {desc, price, address} dicts."""
    wb_data = requests.get(url, headers=headers)  # pass headers so the UA is actually sent
    soup = BeautifulSoup(wb_data.text, 'lxml')
    info = []
    descs = soup.select('div.result_btm_con.lodgeunitname > div > a > span')
    prices = soup.select('div.result_btm_con.lodgeunitname > span.result_price > i')
    addresses = soup.select('div.result_btm_con.lodgeunitname > div > em')
    for desc, price, address in zip(descs, prices, addresses):
        info.append({
            'desc': desc.get_text(),
            'price': price.get_text(),
            'address': list(address.stripped_strings),
        })
    # Clean up the address fragments.
    for item in info:
        item['address'][0] = item['address'][0][:2]  # keep only the two-character city name
        if len(item['address']) < 3:
            continue
        item['address'][2] = item['address'][2].strip('-').strip()  # drop the leading '-'
    return info

for url in urls:
    data += get_housing(url)  # collect every page's listings in one list

total = 0
for item in data:
    print(item)
    total += int(item['price'])
average = total / len(data)
print('Total listings:', len(data))
print('Average price:', average)
~~~
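Once every page's listings sit in the one `data` list, saving them locally is just serialization. A minimal sketch, assuming JSON output and a placeholder file name `listings.json`:

~~~Python
# Minimal sketch: persist the accumulated `data` list as JSON.
import json

with open('listings.json', 'w', encoding='utf-8') as f:
    # ensure_ascii=False keeps the Chinese descriptions and addresses readable
    json.dump(data, f, ensure_ascii=False, indent=2)
~~~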