
Simulating Login with a Web Crawler

Author: tkpy | Published 2018-06-21 11:31

    Key insight for simulated login: fully mimic the browser's behavior — obtain the same URLs and send the same requests the browser would.

    Problems encountered:

    1. The URL fetched with GET may differ from the URL the login form POSTs to.

    2. Captcha handling: the initial GET returns a cookie; carry that same cookie both when fetching the captcha image and when sending the login POST, so that the captcha you see is the one the server validates for your session.
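The cookie synchronization in point 2 is handled most naturally by `requests.Session`, which stores the cookie from the first response and attaches it to every later request automatically. A minimal offline sketch (the cookie name and value here are made up; the real ones come from the server's Set-Cookie header):

```python
import requests

session = requests.Session()
# simulate the cookie the login-page GET would set (hypothetical value)
session.cookies.set('PHPSESSID', 'abc123', domain='www.lezhudai.com', path='/')

# any request prepared through this session now carries that cookie, so the
# captcha GET and the login POST share the same session id
prepared = session.prepare_request(
    requests.Request(
        'GET', 'https://www.lezhudai.com/?action=platform_service.get_captcha'))
print(prepared.headers['Cookie'])  # PHPSESSID=abc123
```

With a session, the manual `cookies=...` plumbing in the code below becomes unnecessary.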

    3. The header requirements differ: the GET request works with a minimal header, while the POST should reuse the headers from a packet capture — except for the Cookie field, which must come from the cookie returned by the initial GET, not from the capture.

    4. After a successful login the site redirects to a landing page; find the Referer link in the captured POST request's headers and fetch that URL with a GET request.

    5. Cookies: merge the cookie from the GET request with the cookie from the POST response so the result matches the cookie seen in the packet capture exactly, and add any fields the capture contains that the responses did not set.
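The merge in point 5 can be done with plain dicts: `dict(a, **b)` copies `a` and then overwrites it with `b`, so the second argument wins on key conflicts. A sketch with made-up cookie names and values:

```python
# cookie from the POST response and from the initial GET (hypothetical values)
post_cookie = {'token': 'xyz789', 'PHPSESSID': 'def456'}
get_cookie = {'PHPSESSID': 'abc123', 'lang': 'zh-CN'}

# dict(a, **b): entries of b override entries of a on conflicts,
# mirroring dict(cookie, **cookies) in the code below
merged = dict(post_cookie, **get_cookie)

# add fields the packet capture showed but neither response actually set
merged['checkedYibin'] = 'checked'

print(merged['PHPSESSID'])  # abc123 -- the GET cookie wins
print(sorted(merged))       # ['PHPSESSID', 'checkedYibin', 'lang', 'token']
```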

    6. Use the browser's F12 developer tools to inspect the captcha request and find the real captcha image URL.
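The `t=0.0142…` query parameter on the captcha URL found via F12 looks like a cache-buster generated by the page (e.g. with `Math.random()`); assuming that is the case, it can be regenerated per request instead of hard-coded:

```python
import random

def captcha_url():
    # t is assumed to be a random cache-buster mimicking the page's own value
    return ('https://www.lezhudai.com/?action=platform_service.get_captcha'
            '&t={:.18f}'.format(random.random()))

url = captcha_url()
print(url)
```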

    7. I have hit 301 errors, caused by an http/https mismatch; a 403 is usually because the User-Agent is rejected — set a realistic browser User-Agent.
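For the 301 case in point 7, the redirect is often just http → https, so rewriting the scheme up front avoids the extra round trip; for a 403, sending a browser-like User-Agent usually helps. A sketch:

```python
from urllib.parse import urlsplit, urlunsplit

def upgrade_to_https(url):
    """Rewrite an http:// URL to https:// (a common cause of 301 redirects)."""
    parts = urlsplit(url)
    if parts.scheme == 'http':
        parts = parts._replace(scheme='https')
    return urlunsplit(parts)

# a realistic browser User-Agent often clears a 403
BROWSER_UA = ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
              'AppleWebKit/537.36 (KHTML, like Gecko) '
              'Chrome/67.0.3396.87 Safari/537.36')

print(upgrade_to_https('http://www.lezhudai.com/?action=platform.index'))
# https://www.lezhudai.com/?action=platform.index
```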

    The code is as follows:

    import requests
    import time
    import ssl
    '''
    Automated bookkeeping for lezhudai (乐助贷)
    url_1: login (POST) URL
    url_2: URL of the page shown after a successful login
    How the login works: the first request returns a cookie; carry that
    cookie on subsequent requests to fetch the data pages.
    '''
    
    def Xiaowei(url,url_1,url_2):
        headers = {
            "Host": "www.lezhudai.com",
            "Connection": "keep-alive",
            "X-Requested-With": "XMLHttpRequest",
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.87 Safari/537.36",
            "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
            "Referer": "https://www.lezhudai.com/platform/",
            "Accept-Encoding": "gzip, deflate, br",
            "Accept-Language": "zh-CN,zh;q=0.9",
        }
        header = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.87 Safari/537.36",
        }
        # disable certificate verification for urllib-based code; note that
        # requests verifies certificates itself (pass verify=False there if needed)
        ssl._create_default_https_context = ssl._create_unverified_context
        # fetch the login page to obtain the initial cookie
        first_request = requests.get(url, headers=header)
        cookies = dict(first_request.cookies.items())
        print(cookies)
        # captcha URL found via F12; the t parameter is just a cache-buster
        captcha_url = 'https://www.lezhudai.com/?action=platform_service.get_captcha&t=0.014255888397257221'
        second_request = requests.get(captcha_url, headers=header, cookies=cookies)
        with open('./lezhudai.jpg', 'wb') as file:
            file.write(second_request.content)
        captcha = input('Enter the captcha: ')
        data = {
            'username':'******',
            'password':'******',
            'captcha':captcha,
            'action':'home_service.get_time'
        }
    
        # POST login request
        html = requests.post(url=url_1, data=data, headers=headers, allow_redirects=False,cookies=cookies)
        print(html.status_code)
        print(html.text)
        cookie = dict(html.cookies.items())
        # merge the two cookies (values from the GET cookie take precedence)
        cookie = dict(cookie, **cookies)
        cookie['currentLocation'] = 'https%3A//www.lezhudai.com/%3Faction%3Dhome.index'
        cookie['checkedYibin'] = 'checked'
        cookie['activity-11'] = '11'
        cookie['Hm_lvt_78e1ddba02aaa0c64487ec1073c62800'] = str(int(time.time()))
        cookie['Hm_lpvt_78e1ddba02aaa0c64487ec1073c62800'] = str(int(time.time()))
        print(cookie)
        header['Referer'] = 'https://www.lezhudai.com/?action=platform.index'
        # fetch the post-login page with the merged cookie
        final_request = requests.get(url_2, headers=header, cookies=cookie)
        print(final_request.text)
        
    if __name__ == '__main__':
        # URL of the login page (fetched with GET)
        url = 'https://www.lezhudai.com/?action=platform.index'
        # URL the login form POSTs to
        url_1 = 'https://www.lezhudai.com/platform/?action=platform_service.logon'
        # URL of the page shown after a successful login
        url_2 = 'https://www.lezhudai.com/?action=home.index'
        Xiaowei(url, url_1, url_2)
    
    

Original article: https://www.haomeiwen.com/subject/yrdqyftx.html