Crawling the Coordinates of All Beijing Bus Stops

Author: 小白小白啦 | Published 2019-12-25 16:19

    Because an experiment I was running needed the coordinates of every bus stop in Beijing, I crawled them with a scraper, using Python multiprocessing to speed things up; the same code works for other cities' bus stops too. There are two steps: first crawl the names of Beijing's bus stops, then call the Baidu geocoding API to resolve each stop to a latitude and longitude. Results and code: https://github.com/FFGF/BeiJIngBusStation

    Crawling Beijing's bus stops

    Crawling the line names

    The names come from https://beijing.8684.cn/: first crawl the list of line names, then the stop names of each line.

    The code is as follows; the crawled lines are saved as a pickle file.
    import requests
    import pandas as pd
    import pickle
    import os
    
    from bs4 import BeautifulSoup
    
    def readData(filePath):
        with open(filePath, 'rb') as f:
            return pickle.load(f)
    
    def writeData(filePath, data):
        with open(filePath, 'wb') as f:
            pickle.dump(data, f)
    
    baseUrl = 'https://beijing.8684.cn'
    # Line listings on 8684.cn are grouped by the first character or digit of the line name
    urlList = ['1', '2', '3', '4', '5', '6', '7', '8', '9', 'B', 'C', 'D', 'F', 'G', 'H', 'K', 'L', 'M', 'P', 'S', 'T', 'X', 'Y', 'Z']
    
    def getBusLineName():
        """爬取北京市公交车线路名称
        :return:
        """
        result = []
        for url in urlList:
            tempUrl = baseUrl + "/list" + url
            html = requests.get(tempUrl)
            soup = BeautifulSoup(html.text, "html.parser")
            aTag = soup.select(".list > a")
            for a in aTag:
                result.append([a.text, a['href']])
        return result
    
    busLines = getBusLineName()
    busLinesPd = pd.DataFrame(columns=['lineName', 'url'], data=busLines)
    
    # Make sure the data directory exists, then save the DataFrame
    if not os.path.exists('./data'):
        os.makedirs('./data')
    writeData('./data/busLinesPd.pkl', busLinesPd)
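
    To sanity-check the saved file, load it back and look at the first few rows (a quick sketch, assuming the script above has already run):

    busLinesPd = readData('./data/busLinesPd.pkl')
    # Two columns: lineName (the line's display name) and url (its relative link on 8684.cn)
    print(busLinesPd.head())
    print(len(busLinesPd))  # roughly 1,900 lines in total, per the crawl below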
    

    Crawling the stop names

    The code is as follows; the results go into a database. If the crawl dies with an error partway through, just run it again: lines that were already crawled are filtered out and only the remaining ones are fetched. A stop gets stored multiple times because it appears on several lines; deduplicate later, when crawling the coordinates.

    import requests
    import pickle
    import time
    import random
    from bs4 import BeautifulSoup
    import MySQLdb
    from multiprocessing import Pool
    
    def readData(filePath):
        """读取pickle文件
        :param filePath: 文件路径
        :return:
        """
        with open(filePath, 'rb') as f:
            return pickle.load(f)
    
    def writeData(filePath, data):
        """将data写入到filePath
        :param filePath: 路径
        :param data: 数据
        :return:
        """
        with open(filePath, 'wb') as f:
            pickle.dump(data, f)
    
    def writeMySql(data):
        """向数据库批量写入数据
        :param data: [[],[],[]...]
        :return:
        """
        db = MySQLdb.connect("localhost", "root", "", "busstation", charset='utf8')
        cursor = db.cursor()
        sql = """
        INSERT INTO stationname(line_name, url, station_name)
        VALUES (%s, %s, %s)
        """
        cursor.executemany(sql, data)
        db.commit()
        cursor.close()
        db.close()
        return
    
    def getExistLines():
        """获得已经写入数据库的公交站数据
        :return:
        """
        db = MySQLdb.connect("localhost", "root", "", "busstation", charset='utf8')
        cursor = db.cursor()
    
        sql = """
        select distinct(line_name) from stationname;
        """
        cursor.execute(sql)
        results = cursor.fetchall()
        db.commit()
        cursor.close()
        db.close()
        return results
    
    baseUrl = 'https://beijing.8684.cn'
    
    def getBusStationName(line):
        """获取每条线路line的公交车站名然后写入数据库
        :param line: 公交线路名称
        :return:
        """
        result = []
        tempUrl = baseUrl + line[1]
        try:
            # Sleep 1-5 seconds at random so the site doesn't reject us
            time.sleep(random.randint(1, 5))
            html = requests.get(tempUrl)
        except requests.RequestException:
            # Back off for 20 seconds, retry recursively, and stop here;
            # without this return, `html` below would be undefined after a failure
            time.sleep(20)
            getBusStationName(line)
            return
        soup = BeautifulSoup(html.text, "html.parser")
        liTag = soup.select(".bus-lzlist")[0].find_all("li")
        for li in liTag:
            result.append([line[0], line[1], li.text])
        writeMySql(result)
        print(line[0], line[1])
        return
    
    
    if __name__ == '__main__':
        wait_crawl = []
        existLines = getExistLines()
        existLines = [item[0] for item in existLines]
        busLinesPd = readData('./data/busLinesPd.pkl')
        # Skip lines that were already crawled and fetch only the rest. On my machine the script had to run twice: the first run got about 1,800 lines, the second about 100 more, roughly 1,900 in total.
        for item in busLinesPd.values:
            if item[0] in existLines:
                continue
            wait_crawl.append([item[0], item[1]])
        p = Pool(4)
        p.map(getBusStationName, wait_crawl)
    

    The database

    The database is named busstation and has two tables: one stores the stop names, the other the stops' coordinates.



    The CREATE TABLE statements:

    CREATE TABLE `station_latlong` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `station_name` varchar(255) DEFAULT NULL,
      `lat` varchar(255) DEFAULT NULL,
      `lng` varchar(255) DEFAULT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
    
    CREATE TABLE `stationname` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `line_name` varchar(255) DEFAULT NULL,
      `url` varchar(255) DEFAULT NULL,
      `station_name` varchar(255) DEFAULT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
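
    Once both tables are filled, the final deduplicated dataset can be pulled straight from station_latlong. A minimal sketch, reusing the connection settings from the scripts above:

    import MySQLdb

    # Dump the deduplicated stop coordinates
    db = MySQLdb.connect("localhost", "root", "", "busstation", charset="utf8")
    cursor = db.cursor()
    cursor.execute("select station_name, lat, lng from station_latlong")
    for station_name, lat, lng in cursor.fetchall():
        print(station_name, lat, lng)
    cursor.close()
    db.close()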
    

    Crawling the coordinates of each stop

    The code is as follows.

    import requests
    import time
    import MySQLdb
    from multiprocessing import Pool, Queue
    
    def getStationName():
        """获取公交站名称
        :return:
        """
        db = MySQLdb.connect("localhost", "root", "", "busstation", charset="utf8")
        cursor = db.cursor()
        sql = """
        select distinct(station_name) from stationname
        """
        cursor.execute(sql)
        results = cursor.fetchall()
        cursor.close()
        db.close()
        return results
    
    def getExistStation():
        """获得已经写入数据库的公交站数据
        :return:
        """
        db = MySQLdb.connect("localhost", "root", "", "busstation", charset='utf8')
        cursor = db.cursor()
    
        sql = """
        select distinct(station_name) from station_latlong
        """
        cursor.execute(sql)
        results = cursor.fetchall()
        db.commit()
        cursor.close()
        db.close()
        return results
    
    def writeMySql(queue):
        """向数据库批量写入数据
        :param data: [[],[],[]...]
        :return:
        """
        data = []
        while not queue.empty():
            data.append(queue.get())
        db = MySQLdb.connect("localhost", "root", "", "busstation", charset='utf8')
        cursor = db.cursor()
        sql = """
        INSERT INTO station_latlong(station_name, lat, lng)
        VALUES (%s, %s, %s)
        """
        cursor.executemany(sql, data)
        db.commit()
        cursor.close()
        db.close()
        return
    
    # Five shared queues for batching the DB writes; the pool's workers can use
    # them as module globals because they inherit them when the processes fork
    queues = [Queue() for _ in range(5)]
    
    def getStationLatLong(station):
        """Geocode one stop with the Baidu API and queue the result.
        :param station: [index, stationName] pair
        """
        index = station[0]
        stationName = "北京市" + station[1] + "公交车站"
        # Replace YOUR_AK with your own Baidu Maps key (see below)
        urlTemplate = "http://api.map.baidu.com/geocoding/v3/?address={}&output=json&ak=YOUR_AK&city=北京市"
        html = requests.get(urlTemplate.format(stationName))
    
        latLongJson = html.json()['result']
        lat = latLongJson['location']['lat']
        lng = latLongJson['location']['lng']
        queue = queues[index % 5]
        queue.put([station[1], lat, lng])
        if queue.qsize() == 20:  # batch size; leftovers are flushed in the main process below
            writeMySql(queue)
            time.sleep(3)
    
    if __name__ == '__main__':
        pool = Pool(4)
        stationNames = getStationName()
        stationNames = set(item[0] for item in stationNames)
        existStations = getExistStation()
        existStations = set(item[0] for item in existStations)
        wait_crawl = stationNames - existStations

        stationNames = [[i, v] for i, v in enumerate(wait_crawl)]
        pool.map(getStationLatLong, stationNames)
        pool.close()
        pool.join()
        # Flush the rows still sitting in the queues (fewer than 20 each)
        for queue in queues:
            if not queue.empty():
                writeMySql(queue)
    

    A few notes on the code. My machine has 4 CPUs, so I start four processes (check yours with os.cpu_count()). To speed up the inserts, five Queues are created and a batch is written whenever a queue reaches 20 entries; the final flush in the main process writes whatever is left once the pool finishes, so the last hundred or so records still reach the database (in the original run I had to manually change if queue.qsize() == 20: to if queue.qsize() == 1: for the tail end instead). Also remember to replace YOUR_AK in urlTemplate with your own Baidu ak.
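
    For reference, a successful geocoding call returns JSON along these lines; this is a sketch based only on the fields the code reads (the real payload may carry more keys), with a hypothetical defensive parser:

    # Illustrative shape of the Baidu geocoding v3 response used above
    sampleResponse = {
        "status": 0,  # 0 means success; non-zero codes signal quota or parameter errors
        "result": {
            "location": {"lng": 116.404, "lat": 39.915}  # illustrative values only
        }
    }

    def parseLatLng(responseJson):
        # Hypothetical helper: return None instead of raising when a lookup fails
        if responseJson.get("status") != 0 or "result" not in responseJson:
            return None
        loc = responseJson["result"]["location"]
        return loc["lat"], loc["lng"]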

    Getting a Baidu ak

    Create an application in the Baidu Maps open platform console and copy out the ak it generates. One ak is good for roughly six thousand geocoding calls before Baidu asks for identity verification, so borrow a classmate's account for another key and keep crawling.
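
    If you end up with several keys, one way to spread the calls across them is to cycle through the list on every request. A sketch; akList and the rotation are my own illustration, not part of the original script:

    import itertools

    # Hypothetical pool of keys borrowed as described above
    akList = ["AK_ONE", "AK_TWO", "AK_THREE"]
    akCycle = itertools.cycle(akList)

    def nextUrl(stationName):
        # Rotate to the next ak on each request to stay under each key's quota
        ak = next(akCycle)
        return ("http://api.map.baidu.com/geocoding/v3/?address={}&output=json"
                "&ak={}&city=北京市").format(stationName, ak)

    With a Pool, each worker process keeps its own copy of the cycle, which still spreads the load roughly evenly across the keys.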


    Results

    [Screenshots: the populated stationname and station_latlong tables]
