Detailed explanation of the usage of the Requests library in Python

高洛峰
Release: 2017-03-17 17:34:09

Previously I covered the use of Python's urllib library: the basic usage of the urllib library for network data collection, and its more advanced features.

Today we will learn how to use the Requests library in Python.

Installation of the Requests library

Use pip to install it. If you already have pip (Python's package management tool; look it up if you are not familiar with it) or an integrated environment such as Python(x,y) or Anaconda, you can install the library directly with pip.

$ pip install requests

After the installation completes, let's take a look at the basic methods:

# GET request
>>> r = requests.get('https://api.github.com/user', auth=('user', 'pass'))
# Print the status code of the GET request
>>> r.status_code
200
# Check the content type of the response -- JSON, UTF-8 encoded
>>> r.headers['content-type']
'application/json; charset=utf8'
>>> r.encoding
'utf-8'
# Print the response content
>>> r.text
u'{"type":"User"...'
# Output the data parsed as JSON
>>> r.json()
{u'private_gists': 419, u'total_private_repos': 77, ...}

Let's take a look at a small example:

# A small example
import requests

r = requests.get('http://www.baidu.com')
print type(r)
print r.status_code
print r.encoding
print r.text
print r.cookies

This requests Baidu's homepage and prints the type of the returned object, the status code, the encoding, the page content and the cookies. Output (the page HTML is omitted):

<class 'requests.models.Response'>
200
UTF-8
...
<RequestsCookieJar[...]>
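If you would rather have a request fail loudly on HTTP error codes such as 404 or 500, the Response object also provides raise_for_status(), which raises an exception for any 4xx or 5xx status. A minimal sketch (the httpbin status endpoint is used purely for illustration):

import requests

r = requests.get('http://httpbin.org/status/404')
try:
    # raise_for_status() raises requests.exceptions.HTTPError for 4xx/5xx responses.
    r.raise_for_status()
except requests.exceptions.HTTPError as e:
    print(e)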

Basic HTTP requests

The Requests library provides all the basic HTTP request methods. For example:

r = requests.post("http://httpbin.org/post")
r = requests.put("http://httpbin.org/put")
r = requests.delete("http://httpbin.org/delete")
r = requests.head("http://httpbin.org/get")
r = requests.options("http://httpbin.org/get")

Basic GET request

r = requests.get("http://httpbin.org/get") #如果想要加参数,可以利用 params 参数: import requests payload = {'key1': 'value1', 'key2': 'value2'} r = requests.get("http://httpbin.org/get", params=payload) print r.url #输出:http://httpbin.org/get?key2=value2&key1=value1
Copy after login

If you request a JSON document, you can parse it with the json() method. For example, write a JSON file yourself, name it a.json with the following content, and serve it over HTTP (Requests needs a full URL, so a bare filename will not work; a simple local HTTP server is enough):

["foo", "bar", { "foo": "bar" }] #利用如下程序请求并解析: import requests r = requests.get("a.json") print r.text print r.json() '''运行结果如下,其中一个是直接输出内容,另外一个方法是利用 json() 方法 解析,感受下它们的不同:''' ["foo", "bar", { "foo": "bar" }] [u'foo', u'bar', {u'foo': u'bar'}]
Copy after login

If you want the raw socket response from the server, you can access r.raw. However, stream=True must be set on the initial request.

r = requests.get('https://github.com/timeline.json', stream=True)
r.raw
# Output: <urllib3.response.HTTPResponse object at 0x...>
r.raw.read(10)
'\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03'

In this way you obtain the raw socket content of the response.
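In practice, when stream=True is set you usually do not read r.raw directly; iterating over the decoded content in chunks and writing it to disk is more convenient. A minimal sketch (the chunk size and output file name are illustrative choices, not part of the original example):

import requests

r = requests.get('https://github.com/timeline.json', stream=True)

# Save the response to disk piece by piece instead of loading it all into memory.
with open('timeline.json', 'wb') as fd:
    for chunk in r.iter_content(chunk_size=1024):
        if chunk:  # skip keep-alive chunks
            fd.write(chunk)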

If you want to add headers, you can pass the headers parameter:

import requests

payload = {'key1': 'value1', 'key2': 'value2'}
headers = {'content-type': 'application/json'}
r = requests.get("http://httpbin.org/get", params=payload, headers=headers)
print r.url
# The headers parameter adds extra fields to the request headers.

Basic POST request

For POST requests, we usually need to pass some parameters. The most basic way to do that is with the data argument.

import requests

payload = {'key1': 'value1', 'key2': 'value2'}
r = requests.post("http://httpbin.org/post", data=payload)
print r.text

Output:

{
  "args": {},
  "data": "",
  "files": {},
  "form": {
    "key1": "value1",
    "key2": "value2"
  },
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "Content-Length": "23",
    "Content-Type": "application/x-www-form-urlencoded",
    "Host": "httpbin.org",
    "User-Agent": "python-requests/2.9.1"
  },
  "json": null,
  "url": "http://httpbin.org/post"
}

You can see that the parameters were passed successfully, and the server returned the data we sent.

Sometimes the data we need to send is not form-encoded but JSON. In that case we can serialize the dictionary with json.dumps() and send the result as the request body.

import json
import requests

url = 'http://httpbin.org/post'
payload = {'some': 'data'}
r = requests.post(url, data=json.dumps(payload))
print r.text

Output:

{
  "args": {},
  "data": "{\"some\": \"data\"}",
  "files": {},
  "form": {},
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "Content-Length": "16",
    "Host": "httpbin.org",
    "User-Agent": "python-requests/2.9.1"
  },
  "json": {
    "some": "data"
  },
  "url": "http://httpbin.org/post"
}

With the method above we can POST data in JSON format.
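As a side note, Requests 2.4.2 and later also accept a json parameter that serializes the dictionary for you and sets the Content-Type header to application/json, so the json.dumps() call becomes unnecessary. A minimal sketch:

import requests

url = 'http://httpbin.org/post'
payload = {'some': 'data'}

# The json argument handles both serialization and the Content-Type header.
r = requests.post(url, json=payload)
print(r.json()['json'])   # {u'some': u'data'}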

If you want to upload a file, just use the files parameter directly:

# Create a file named test.txt whose content is "Hello World!"
import requests

url = 'http://httpbin.org/post'
files = {'file': open('test.txt', 'rb')}
r = requests.post(url, files=files)
print r.text

Output:

{
  "args": {},
  "data": "",
  "files": {
    "file": "Hello World!"
  },
  "form": {},
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "Content-Length": "156",
    "Content-Type": "multipart/form-data; boundary=7d8eb5ff99a04c11bb3e862ce78d7000",
    "Host": "httpbin.org",
    "User-Agent": "python-requests/2.9.1"
  },
  "json": null,
  "url": "http://httpbin.org/post"
}

In this way we successfully upload a file.
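You can also set the file name and content type explicitly by passing a tuple instead of a bare file object; the file name report.csv below is just an illustration. A small sketch:

import requests

url = 'http://httpbin.org/post'

# The tuple form of the files dict: (filename, file object, content type).
files = {'file': ('report.csv', open('report.csv', 'rb'), 'text/csv')}
r = requests.post(url, files=files)
print(r.status_code)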

Requests supports streaming uploads, which lets you send large data streams or files without reading them into memory first. To use a streaming upload, simply provide a file-like object as your request body, which is very convenient:

# Opening the file in binary mode is recommended for streaming uploads.
with open('massive-body', 'rb') as f:
    requests.post('http://some.url/streamed', data=f)

Cookies

If a response contains cookies, we can use the cookies attribute to get them:

import requests

url = 'http://example.com'
r = requests.get(url)
print r.cookies
print r.cookies['example_cookie_name']

The program above is just a sample; you can use the cookies attribute to read the cookies set by a site.

In addition, you can use the cookies parameter to send cookie information to the server:

import requests

url = 'http://httpbin.org/cookies'
cookies = dict(cookies_are='working')
r = requests.get(url, cookies=cookies)
print r.text
# Output: '{"cookies": {"cookies_are": "working"}}'
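Besides a plain dict, Requests also accepts a RequestsCookieJar, which lets you scope cookies to a specific domain and path. A small sketch (the cookie names are just examples):

import requests

jar = requests.cookies.RequestsCookieJar()
# Each cookie is scoped to a domain and a path.
jar.set('tasty_cookie', 'yum', domain='httpbin.org', path='/cookies')
jar.set('gross_cookie', 'blech', domain='httpbin.org', path='/elsewhere')

r = requests.get('http://httpbin.org/cookies', cookies=jar)
print(r.text)
# Only the cookie whose path matches the request is sent:
# '{"cookies": {"tasty_cookie": "yum"}}'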

Timeout configuration

You can use the timeout parameter to set the maximum time to wait for a request:

requests.get('http://github.com', timeout=0.001)

Note: timeout only applies to the connection process; it has nothing to do with downloading the response body.

In other words, it only limits how long the request itself may take to get a response started; if the returned response contains a large amount of content, downloading it will still take time.
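When the server does not respond within the limit, Requests raises an exception that you can catch. Since version 2.4.0 you can also pass a (connect, read) tuple to configure the two timeouts separately. A minimal sketch (the concrete values are arbitrary):

import requests

try:
    # 3 seconds to establish the connection, 10 seconds to wait for data.
    r = requests.get('http://github.com', timeout=(3, 10))
    print(r.status_code)
except requests.exceptions.Timeout:
    print('The request timed out')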

Session Object

In the requests above, each call actually initiates a brand-new request, as if every one of them were opened in a separate browser. No session is shared between them, even when the same URL is requested. For example:

import requests

requests.get('http://httpbin.org/cookies/set/sessioncookie/123456789')
r = requests.get("http://httpbin.org/cookies")
print(r.text)

Result:

{
  "cookies": {}
}

Obviously these two requests are not in the same session, so the cookie cannot be retrieved. What should we do if we need to keep a persistent session with a site, the way a browser does when you browse Taobao and jump between pages within one long-lived session?

The solution is as follows:

import requests

s = requests.Session()
s.get('http://httpbin.org/cookies/set/sessioncookie/123456789')
r = s.get("http://httpbin.org/cookies")
print(r.text)

Here we make two requests: one sets the cookie, the other retrieves it. Output:

{
  "cookies": {
    "sessioncookie": "123456789"
  }
}

This time the cookie is retrieved successfully, because both requests were made within the same session.

Since the session object is shared by all requests made through it, we can also use it for global configuration:

import requests

s = requests.Session()
s.headers.update({'x-test': 'true'})
r = s.get('http://httpbin.org/headers', headers={'x-test2': 'true'})
print r.text

Here we set a header on the session with s.headers.update, and then set another header on the individual request. What happens? Simple: both headers are sent. Output:

{
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "Host": "httpbin.org",
    "User-Agent": "python-requests/2.9.1",
    "X-Test": "true",
    "X-Test2": "true"
  }
}

What if the headers passed to the get method also contain x-test?

r = s.get('http://httpbin.org/headers', headers={'x-test': 'true'})

The request-level value overrides the session-wide configuration:

{
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "Host": "httpbin.org",
    "User-Agent": "python-requests/2.9.1",
    "X-Test": "true"
  }
}

What if you want to drop a globally configured header for a single request? Easy: just set it to None.

r = s.get('http://httpbin.org/headers', headers={'x-test': None})

Output:

{
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "Host": "httpbin.org",
    "User-Agent": "python-requests/2.9.1"
  }
}

That is the basic usage of the Session object.
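In recent versions of Requests a Session can also be used as a context manager, which guarantees it is closed as soon as the with block ends. A minimal sketch:

import requests

# The session is closed automatically when the block exits.
with requests.Session() as s:
    s.get('http://httpbin.org/cookies/set/sessioncookie/123456789')
    r = s.get('http://httpbin.org/cookies')
    print(r.text)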

SSL Certificate Verification

Websites that start with https are everywhere now. Requests can verify SSL certificates for HTTPS requests, just like a web browser. To check the SSL certificate of a host, you can use the verify parameter. Some time ago the 12306 certificate was invalid, so let's test it:

import requests

r = requests.get('https://kyfw.12306.cn/otn/', verify=True)
print r.text

Result:

requests.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)

Let's try GitHub's:

import requests

r = requests.get('https://github.com', verify=True)
print r.text

This is a normal request; the output is too long to paste here.

If we want to skip certificate verification for 12306, set verify to False:

import requests

r = requests.get('https://kyfw.12306.cn/otn/', verify=False)
print r.text

Now the request goes through normally. verify defaults to True, so you have to set it explicitly when you need to skip verification.
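Instead of switching verification off entirely, verify can also point to a trusted CA bundle so that a certificate missing from the system store can still be validated; the path below is only a placeholder. A minimal sketch:

import requests

# verify may be the path to a CA bundle file (or a directory of certificates).
r = requests.get('https://kyfw.12306.cn/otn/', verify='/path/to/ca-bundle.crt')
print(r.status_code)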

Proxy

If you need to use a proxy, you can configure individual requests by providing the proxies parameter to any request method.

import requests

# The keys of the proxies dict must match the scheme of the request URL,
# so a mapping for "http" is included here as well.
proxies = {
    "http": "http://41.118.132.69:4433",
    "https": "http://41.118.132.69:4433",
}
r = requests.post("http://httpbin.org/post", proxies=proxies)
print r.text

# Proxies can also be configured through the environment variables
# HTTP_PROXY and HTTPS_PROXY:
# export HTTP_PROXY="http://10.10.1.10:3128"
# export HTTPS_PROXY="http://10.10.1.10:1080"
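If the proxy requires HTTP Basic authentication, the credentials can be embedded in the proxy URL with the user:password@host syntax; the address and credentials below are placeholders. A small sketch:

import requests

# user:password@host syntax for proxies that require authentication.
proxies = {
    "http": "http://user:password@10.10.1.10:3128",
}
r = requests.get("http://httpbin.org/ip", proxies=proxies)
print(r.text)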


That concludes this detailed look at the usage of the Requests library in Python.
