Scraping Real Estate Data with Python and Displaying It on a Map


Author: 编程新视野 | Published 2019-01-08 13:38

    JiwuspiderSpider.py


    # -*- coding: utf-8 -*-
    import re

    from scrapy import Spider, Request

    from jiwu.items import JiwuItem


    class JiwuspiderSpider(Spider):
        name = "jiwuspider"
        allowed_domains = ["wlmq.jiwu.com"]
        start_urls = ['http://wlmq.jiwu.com/loupan']

        def parse(self, response):
            """
            Parse one page of the property listing.
            :param response:
            :return:
            """
            for url in response.xpath('//a[@class="index_scale"]/@href').extract():
                yield Request(url, self.parse_html)  # request each detail URL from the list and parse it

            # If a "next page" link still exists, grab its URL
            nextpage = response.xpath('//a[@class="tg-rownum-next index-icon"]/@href').extract_first()
            # only follow it if it is not empty
            if nextpage:
                yield Request(nextpage, self.parse)  # call back into parse() to keep paging

        def parse_html(self, response):
            """
            Parse a single property detail page and generate an item.
            :param response:
            :return:
            """
            pattern = re.compile(".*?lng = '(.*?)';.*?lat = '(.*?)';.*?bname = '(.*?)';.*?"
                                 "address = '(.*?)';.*?price = '(.*?)';", re.S)
            item = JiwuItem()
            results = re.findall(pattern, response.text)
            for result in results:
                item['name'] = result[2]
                item['address'] = result[3]
                # keep only the digits of the price; default to 0 if none are found
                pricestr = result[4]
                pattern2 = re.compile(r'(\d+)')
                s = re.findall(pattern2, pricestr)
                if len(s) == 0:
                    item['price'] = 0
                else:
                    item['price'] = s[0]
                item['lng'] = result[0]
                item['lat'] = result[1]
                yield item
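    All of the fields that matter (coordinates, name, address, price) live in inline JavaScript variables on the detail page rather than in the HTML structure, which is why parse_html uses a regular expression instead of XPath. Below is a minimal standalone check of that regex against a made-up snippet; the variable values are invented for illustration.

    # quick sanity check of the detail-page regex, outside of Scrapy
    import re

    sample = """
    var lng = '87.6168'; var lat = '43.8256'; var bname = 'Sample Estate';
    var address = 'Sample Road 1'; var price = '8500 yuan/m2';
    """
    pattern = re.compile(".*?lng = '(.*?)';.*?lat = '(.*?)';.*?bname = '(.*?)';.*?"
                         "address = '(.*?)';.*?price = '(.*?)';", re.S)
    for lng, lat, bname, address, price in re.findall(pattern, sample):
        print(lng, lat, bname, address, price)
    # prints: 87.6168 43.8256 Sample Estate Sample Road 1 8500 yuan/m2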

    items.py

    # -*- coding: utf-8 -*-

    # Define here the models for your scraped items
    #
    # See documentation in:
    # http://doc.scrapy.org/en/latest/topics/items.html
    import scrapy


    class JiwuItem(scrapy.Item):
        # define the fields for your item here like:
        name = scrapy.Field()
        price = scrapy.Field()
        address = scrapy.Field()
        lng = scrapy.Field()
        lat = scrapy.Field()
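    JiwuItem is a plain container: the spider fills its fields and the pipeline turns it back into a dict. A short illustration outside of Scrapy (the field values here are invented):

    from jiwu.items import JiwuItem

    item = JiwuItem()
    item['name'] = 'Sample Estate'
    item['address'] = 'Sample Road 1'
    item['price'] = '8500'
    item['lng'] = '87.6168'
    item['lat'] = '43.8256'
    print(dict(item))  # {'name': 'Sample Estate', 'address': 'Sample Road 1', ...}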

    pipelines.py — note that the MongoDB insert is commented out here; pick whichever storage method you prefer.

    # -*- coding: utf-8 -*-

    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
    import pymongo
    from scrapy.conf import settings
    from openpyxl import workbook


    class JiwuPipeline(object):
        wb = workbook.Workbook()
        ws = wb.active
        # header row: estate name, address, price, longitude, latitude
        ws.append(['小区名称', '地址', '价格', '经度', '纬度'])

        def __init__(self):
            # read the database connection info from the project settings
            host = settings['MONGODB_URL']
            port = settings['MONGODB_PORT']
            dbname = settings['MONGODB_DBNAME']
            client = pymongo.MongoClient(host=host, port=port)
            # select the database
            db = client[dbname]
            self.table = db[settings['MONGODB_TABLE']]

        def process_item(self, item, spider):
            jiwu = dict(item)
            # self.table.insert(jiwu)  # uncomment to also store the record in MongoDB
            line = [item['name'], item['address'], str(item['price']), item['lng'], item['lat']]
            self.ws.append(line)
            self.wb.save('jiwu.xlsx')
            return item
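    The pipeline reads its connection info from the project settings (older Scrapy style, via scrapy.conf), so settings.py needs the matching keys plus the pipeline registration. A sketch with placeholder values; adjust the host, port and names to your own environment:

    # settings.py (excerpt) — values below are placeholders
    ITEM_PIPELINES = {
        'jiwu.pipelines.JiwuPipeline': 300,
    }

    MONGODB_URL = '127.0.0.1'    # MongoDB host
    MONGODB_PORT = 27017         # MongoDB port
    MONGODB_DBNAME = 'jiwu'      # database name (placeholder)
    MONGODB_TABLE = 'loupan'     # collection name (placeholder)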

    Final report data (screenshot)

    MongoDB database (screenshot)

    Map report result: BDP shared dashboard, shared visualization (screenshot)
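    The article loads jiwu.xlsx into BDP to build the map dashboard. If you want to preview the points locally instead, the sketch below uses folium, which is a swapped-in tool not used in the original article; the column order follows the header row written by the pipeline, and the scraped coordinates may use a Chinese map datum, so markers can be slightly offset.

    # local preview of the scraped points (alternative to the BDP dashboard)
    # assumes jiwu.xlsx was produced by JiwuPipeline above
    import folium
    from openpyxl import load_workbook

    ws = load_workbook('jiwu.xlsx').active
    m = folium.Map(location=[43.83, 87.62], zoom_start=11)  # roughly centered on Urumqi
    # skip the header row; columns are name, address, price, lng, lat
    for name, address, price, lng, lat in ws.iter_rows(min_row=2, values_only=True):
        folium.Marker([float(lat), float(lng)], popup='{} {}'.format(name, price)).add_to(m)
    m.save('map.html')  # open in a browser to see the markers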
