Just finished this week's homework. At first I was puzzled that the 58.com listing page showed nothing but promoted posts and Zhuanzhuan items, with no ordinary listings left... After asking around, I went ahead and scraped Zhuanzhuan instead. Overall the difficulty was manageable, and it was good practice for this week's material.
My Results
[screenshot of the scraped output]

My Code
```python
from bs4 import BeautifulSoup
import requests
import time

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'}

def get_info(url):
    """Scrape title, category, price, area, and page views from one detail page."""
    time.sleep(2)  # throttle requests so we don't hammer the site
    wb_data = requests.get(url, headers=headers)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    titles = soup.select('body > div.content > div > div.box_left > div.info_lubotu.clearfix > div.box_left_top > h1')
    cates = soup.select('#nav > div > span > a')
    prices = soup.select('body > div.content > div > div.box_left > div.info_lubotu.clearfix > div.info_massege.left > div.price_li > span.price_now > i')
    areas = soup.select('body > div.content > div > div.box_left > div.info_lubotu.clearfix > div.info_massege.left > div.palce_li > span > i')
    pageviews = soup.select('body > div.content > div > div.box_left > div.info_lubotu.clearfix > div.box_left_top > p > span.look_time')
    # On a detail page each selector matches a single node (cates matches the
    # breadcrumb links, and zip truncates it to the first one)
    for title, cate, price, area, pageview in zip(titles, cates, prices, areas, pageviews):
        data = {
            'title': title.get_text(),
            'cate': cate.get_text(),
            'price': price.get_text(),
            'area': area.get_text(),
            'pageview': pageview.get_text()
        }
        print(data)

def get_links():
    """Collect detail-page links from the listing page, skipping promoted entries."""
    url = 'http://bj.58.com/pbdn/'
    page_data = requests.get(url, headers=headers)  # reuse the browser User-Agent here too
    soup = BeautifulSoup(page_data.text, 'lxml')
    links = soup.select('#infolist > div.infocon > table > tbody > tr > td.t > a')
    urls = []
    for link in links:
        # Genuine Zhuanzhuan listings carry this exact onclick value; ads do not
        if link.get('onclick') == "clickLog('from=zzpc_infoclick');":
            info_link = link.get('href').split('?')[0]  # drop the tracking query string
            urls.append(info_link)
    return urls

urls = get_links()
for url in urls:
    get_info(url)
```
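One thing the script above does not do is guard the network calls: a timeout or connection error would raise mid-crawl, and an error page would be parsed silently. Below is a small hardening sketch using only the standard `timeout` parameter and `raise_for_status` from requests; `safe_get` is a hypothetical helper, not part of the original code:

```python
import requests

def safe_get(url, headers=None, timeout=10):
    """Fetch a URL, returning None instead of crashing on network errors."""
    try:
        resp = requests.get(url, headers=headers, timeout=timeout)
        resp.raise_for_status()  # turn 4xx/5xx responses into exceptions
        return resp
    except requests.RequestException as err:  # covers timeouts, DNS and HTTP errors
        print('request failed:', url, err)
        return None
```

`get_info` and `get_links` could then call `safe_get` and simply skip any page whose response comes back as `None`.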
Summary
- The detail pages no longer show post time or item condition, so those two fields were dropped
- To exclude promoted listings, a single if statement filters on one field (the link's onclick attribute); see the sketch after this list
- Two functions: one collects the listing links, the other scrapes each detail page
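As a self-contained illustration of that filter, the sketch below runs it against a made-up HTML fragment. The onclick value is the one used in the real code, but the fragment and URLs are invented for demonstration:

```python
from bs4 import BeautifulSoup

# Invented HTML fragment mimicking the 58.com listing structure
html = '''
<table><tr>
  <td class="t">
    <a href="http://zhuanzhuan.58.com/detail/1.shtml?from=list"
       onclick="clickLog('from=zzpc_infoclick');">genuine listing</a>
    <a href="http://jump.zhineng.58.com/promo?id=2"
       onclick="clickLog('from=ad');">promoted listing</a>
  </td>
</tr></table>
'''

soup = BeautifulSoup(html, 'lxml')
for link in soup.select('td.t > a'):
    # Only genuine listings carry this exact onclick value
    if link.get('onclick') == "clickLog('from=zzpc_infoclick');":
        print(link.get('href').split('?')[0])
# prints: http://zhuanzhuan.58.com/detail/1.shtml
```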