Summary of common Scrapy crawler commands and Scrapy POST requests

Author: tkpy | Published 2018-07-31 17:29

    Create a crawler project

    scrapy startproject project_name
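    Running the command produces roughly the following layout (the exact set of files varies slightly by Scrapy version):

    project_name/
        scrapy.cfg            # deploy configuration
        project_name/         # the project's Python module
            __init__.py
            items.py
            middlewares.py
            pipelines.py
            settings.py
            spiders/
                __init__.py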
    

    Generate a spider

    scrapy genspider baidu_spider www.baidu.com
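    genspider writes a new spider module into the project's spiders/ directory from the basic template; the generated file looks roughly like this (the exact template depends on the Scrapy version):

    import scrapy

    class BaiduSpiderSpider(scrapy.Spider):
        name = 'baidu_spider'
        allowed_domains = ['www.baidu.com']
        start_urls = ['http://www.baidu.com/']

        def parse(self, response):
            pass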
    

    Run a spider from a single file (no project required)

    scrapy runspider spider_file.py
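    Note that runspider takes a path to a .py file, not a spider name, and executes the self-contained spider inside it. A minimal sketch of such a file (quotes.toscrape.com is a public practice site, used here only as an illustration):

    # standalone_spider.py -- run with: scrapy runspider standalone_spider.py
    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = 'quotes'
        start_urls = ['http://quotes.toscrape.com/']

        def parse(self, response):
            # Yield one item per quote block on the page
            for quote in response.css('div.quote'):
                yield {'text': quote.css('span.text::text').get()}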
    

    Resume a crawl from where it stopped

    scrapy crawl spider_name -s JOBDIR=crawls/spider_name
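    JOBDIR persists the scheduler queue and the set of already-seen requests to disk, so rerunning the same command resumes the interrupted crawl. The setting can also be fixed inside the spider instead of on the command line (a sketch; my_spider is a placeholder name):

    import scrapy

    class MySpider(scrapy.Spider):
        name = 'my_spider'
        # Equivalent to passing -s JOBDIR=crawls/my_spider on the command line;
        # crawl state is written here and reloaded on the next run.
        custom_settings = {'JOBDIR': 'crawls/my_spider'}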
    

    Run a spider from cmd or a terminal

    scrapy crawl spider_name
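    The same crawl can also be launched from a Python script instead of the shell via CrawlerProcess (a minimal sketch; 'spider_name' is the placeholder name used above):

    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings

    # Load the project settings, register the spider by name, and run it
    process = CrawlerProcess(get_project_settings())
    process.crawl('spider_name')
    process.start()  # blocks until the crawl finishes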
    

    A Scrapy POST request to Jianshu's search feature

    import scrapy
    import json
    
    class JianshuSpider(scrapy.Spider):
        handle_httpstatus_list = [404]
        name = 'jianshu'
        allowed_domains = ['www.jianshu.com']
        headers = {
            "Host": "www.jianshu.com",
            "Connection": "keep-alive",
            "Content-Length": "0",
            "Accept": "application/json",
            "Origin": "https://www.jianshu.com",
            "X-CSRF-Token": "ftkf0tgVZjazuefhOQIGxF8hErgCVcx6ZzI0rc/gW8fnLXFlCMxvrmynQDnCaxfeSazU8FzkXLnNDKC04P/n1Q==",
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36",
            "Referer": "https://www.jianshu.com/search?utf8=%E2%9C%93&q=%E6%9A%B4%E9%9B%B7",
            "Accept-Encoding": "gzip, deflate, br",
            "Accept-Language": "zh-CN,zh;q=0.9",
    
        }
        cookies = {
            "signin_redirect":"https%3A%2F%2Fwww.jianshu.com%2Fsearch%3Futf8%3D%25E2%259C%2593%26q%3D%25E6%259A%25B4%25E9%259B%25B7",
            "read_mode":"day",
            "default_font":"font2",
            "locale":"zh-CN",
            "_m7e_session":"ef50c62444a30571485f70fc07580e0d",
            "Hm_lvt_0c0e9d9b1e7d617b3e6842e85b9fb068":"1533108867",
            "Hm_lpvt_0c0e9d9b1e7d617b3e6842e85b9fb068":"1533108867",
            "sajssdk_2015_cross_new_user":"1",
            "sensorsdata2015jssdkcross":"%7B%22distinct_id%22%3A%22164f468d0e73a8-0825d1e6f53621-47e1039-2073600-164f468d0e847a%22%2C%22%24device_id%22%3A%22164f468d0e73a8-0825d1e6f53621-47e1039-2073600-164f468d0e847a%22%2C%22props%22%3A%7B%7D%7D",
    
        }
        def start_requests(self):
            start_url = 'https://www.jianshu.com/search/do?q=%E6%9A%B4%E9%9B%B7&type=note&page=1&order_by=default'
            # If the request needs to carry form data, note that formdata=
            # is only accepted by scrapy.FormRequest, not scrapy.Request
            # (see the FormRequest sketch after this code block):
            # data = {
            #     'xxx': 'xxx',
            # }
            yield scrapy.Request(
                start_url,
                callback=self.parse,
                headers=self.headers,
                cookies=self.cookies,
                method='POST')

        def parse(self, response):
            # Parse the JSON body; response.text replaces the deprecated
            # response.body_as_unicode()
            sites = json.loads(response.text)
            print(sites)
    

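    As noted in the comment above, formdata= is only accepted by scrapy.FormRequest. A minimal sketch of that variant (the field names in data mirror the query parameters of the search URL above, but whether Jianshu accepts them as form fields is an assumption):

    import scrapy

    class JianshuFormSpider(scrapy.Spider):
        # Hypothetical variant that sends the search parameters as POST form data
        name = 'jianshu_form'
        allowed_domains = ['www.jianshu.com']

        def start_requests(self):
            data = {
                'q': '暴雷',   # search keyword; assumed field name
                'page': '1',
            }
            # FormRequest defaults to method='POST' when formdata is given
            yield scrapy.FormRequest(
                'https://www.jianshu.com/search/do',
                formdata=data,
                callback=self.parse)

        def parse(self, response):
            print(response.text)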