Simulated Taobao login fails: still scraping the logged-out page
# -*- coding: utf-8 -*-
import requests
import re
s = requests.session()
login_data = {'email': 'xxx', 'password': 'xxx', }
headers = {'User-Agent':'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36',
'Host':'log.mmstat.com',
'Referer':'https://www.taobao.com/'
}
# POST the credentials to log in
s.post('https://login.taobao.com/member/login.jhtml?redirectURL=https%3A%2F%2Fwww.taobao.com%2F', login_data, headers=headers)
# Verify the login by fetching the Taobao homepage and checking its content
r = s.get('https://www.taobao.com')
print(r.text)
Still a beginner here.
The username and password are omitted.
What I get back is still the logged-out page source. I don't know what I'm missing; has anyone managed this successfully?
Be careful to attach the right cookies when sending requests.
When simulating a login, pay attention to the following points.
Watch the request the browser sends during a normal login:
- What fields were submitted?
- Which cookies were sent along?
- Does the request URL carry any parameters?
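One way to act on the cookie point above is to copy the `Cookie` header from the browser's network debugger into a `requests.Session`. A minimal sketch (the cookie names and values here are hypothetical, not real Taobao cookies):

```python
import requests

s = requests.Session()
# Hypothetical Cookie header value copied from the browser's network debugger:
raw_cookie = "_tb_token_=abc123; cookie2=xyz789"

# Split the header into name/value pairs and load them into the session jar.
for pair in raw_cookie.split("; "):
    name, _, value = pair.partition("=")
    s.cookies.set(name, value)

# Every request made through this session now carries these cookies.
print(sorted(s.cookies.keys()))  # ['_tb_token_', 'cookie2']
```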
For example, here is how I handled the crawler I wrote to scrape borrowing records from the Yunnan University Library. The default password for the library's login system is the last eight digits of the student number, so we take advantage of that. But when the form is submitted, it carries not only the username and password but also a hidden `lt` field. This field is written into a hidden input when the login form is generated, so it must be extracted from the page. The form's POST address also contains a `jsessionid`, which you likewise need to extract from the login page and carry along.
In short, I hope my approach gives you some pointers.
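Extracting the hidden `lt` token and the `jsessionid`-bearing form action can be done with a couple of regexes. A sketch, using made-up HTML shaped like a CAS-style login form (the real library page will differ):

```python
import re

# Hypothetical login-page HTML, similar in shape to a CAS-style login form:
html = '''
<form action="/login;jsessionid=A1B2C3D4E5?service=lib" method="post">
  <input type="hidden" name="lt" value="LT-12345-abcdef"/>
  <input type="text" name="username"/>
  <input type="password" name="password"/>
</form>
'''

# Pull out the hidden lt token and the form action (which carries jsessionid):
lt = re.search(r'name="lt"\s+value="([^"]+)"', html).group(1)
action = re.search(r'<form action="([^"]+)"', html).group(1)
print(lt)      # LT-12345-abcdef
print(action)  # /login;jsessionid=A1B2C3D4E5?service=lib
```

Both values then go into the login POST: `lt` as a form field, and `action` (with its `jsessionid`) as the URL to post to.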
So, the very important point: when the server detects no difference between your simulated login request and a normally submitted one, the login succeeds.
Imitate the normal login action, and keep digging through the browser's network debugging tools. Attached below is a simulated-login crawler I wrote some time ago.
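To compare your request against the browser's, you can prepare a request without sending it and inspect exactly what would go out on the wire. A minimal sketch (not the author's original crawler; the URL, cookie, and form values are hypothetical):

```python
import requests

s = requests.Session()
s.headers.update({
    "User-Agent": "Mozilla/5.0",
    "Referer": "https://www.taobao.com/",
})
s.cookies.set("cookie2", "xyz789")  # hypothetical value copied from the browser

# Prepare (but do not send) a login POST, then inspect what would actually be
# transmitted and compare it with the request shown in the network debugger.
req = requests.Request("POST", "https://example.com/login",
                       data={"username": "u", "password": "p"})
prepared = s.prepare_request(req)
print(prepared.headers.get("Cookie"))  # cookie2=xyz789
print(prepared.body)                   # username=u&password=p
```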
By the way, why Taobao of all sites == Just reuse the cookies from a session where you are already logged in. Zhihu seems to require a captcha too.