Under the influence of my programmer group chat (and my own dirty mind), they're always taking me for a ride, so I figured it was time to collect some material of my own (pretty girl pictures).
Code
import requests
from lxml import etree

# The first 20 list pages of jandan's OOXX section
urls = ['http://jandan.net/ooxx/page-{}'.format(i) for i in range(0, 20)]
path = 'C://Users/Administrator/Desktop/煎蛋网/'
header = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'
}

def get_photo(url):
    html = requests.get(url, headers=header)
    selector = etree.HTML(html.text)
    # Grab the "view original image" links on the page
    photo_urls = selector.xpath('//p/a[@class="view_img_link"]/@href')
    for photo_url in photo_urls:
        # The hrefs are protocol-relative ("//..."), so prepend the scheme
        data = requests.get('http:' + photo_url, headers=header)
        # Name the saved file after the last 10 characters of the URL
        with open(path + photo_url[-10:], 'wb') as fp:
            fp.write(data.content)

for url in urls:
    get_photo(url)
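To see what the XPath in the script actually pulls out, here is a minimal check against a hand-written HTML fragment shaped like jandan's image posts (the snippet and the hostname are made up for illustration; jandan's real markup may differ):

```python
from lxml import etree

# Fake fragment mimicking jandan's markup: the original-image link
# sits in an <a class="view_img_link"> inside a <p>.
snippet = '<p><a href="//wx1.example.com/large/abc1234.jpg" class="view_img_link">[查看原图]</a></p>'
selector = etree.HTML(snippet)
links = selector.xpath('//p/a[@class="view_img_link"]/@href')
print(links)  # ['//wx1.example.com/large/abc1234.jpg']
```

Note the hrefs come back protocol-relative, which is why the script prepends 'http:' before downloading.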
I recorded a video for this before, but it wouldn't work this time and kept erroring out. A crawler is always a work in progress!!!!
Results
Hop on, quick (no, actually I should go face the wall and reflect on myself)
Question
I want to crawl every user on Jianshu. If you have any ideas, feel free to message me!!
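One idea for the Jianshu question, sketched with a hypothetical `fetch_following(user)` stub standing in for the real HTTP request and XPath (Jianshu's actual following-page URLs and markup would need to be checked): treat users as nodes in a graph and do a breadth-first crawl over each user's following list, with a visited set so nobody is fetched twice.

```python
from collections import deque

def crawl_users(seed, fetch_following, limit=1000):
    """Breadth-first crawl: start from one user slug and walk
    following links, deduplicating with a visited set."""
    seen = {seed}
    queue = deque([seed])
    while queue and len(seen) < limit:
        user = queue.popleft()
        # In practice this would be requests.get + an XPath over the page
        for neighbor in fetch_following(user):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# Toy demo with a fake following graph instead of real requests:
graph = {'a': ['b', 'c'], 'b': ['c', 'd'], 'c': [], 'd': ['a']}
found = sorted(crawl_users('a', lambda u: graph.get(u, [])))
print(found)  # ['a', 'b', 'c', 'd']
```

The `limit` cap matters for a site-wide crawl like this, as does rate limiting between requests; without both, the queue (and the site's patience) runs out fast.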