This article introduces, with examples, how to use HTTP and HTTPS proxies in Python.
When using Python to crawl data from the Internet, some websites or API endpoints are rate-limited or blocked. In such cases a proxy can reduce request failures and speed up crawling. The main ways a Python program can use a proxy are as follows:
(1) If the code uses a network library or crawler framework to fetch data, such frameworks generally support setting a proxy directly, for example:
import urllib.request as urlreq

# Set up an HTTPS proxy
ph = urlreq.ProxyHandler({'https': 'https://127.0.0.1:1080'})
oper = urlreq.build_opener(ph)
# Install the opener globally, so that all requests automatically use the proxy
urlreq.install_opener(oper)
res = oper.open("https://www.google.com")
print(res.read())
import requests as req

print(req.get("https://www.google.com", proxies={'https': 'https://127.0.0.1:1080'}).content)
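If you do not want every request in the process to go through the proxy, you can also skip install_opener and use the opener object directly. A minimal sketch with the standard library (the address 127.0.0.1:1080 is a placeholder for your real proxy):

```python
import urllib.request as urlreq

# Hypothetical local proxy address -- replace with a real one.
PROXY = 'http://127.0.0.1:1080'

# Build an opener that routes both http and https traffic through the proxy
# without installing it globally; the rest of the program keeps connecting
# directly, and only calls made through this opener use the proxy.
ph = urlreq.ProxyHandler({'http': PROXY, 'https': PROXY})
opener = urlreq.build_opener(ph)

# The handler keeps the mapping it was given, which is handy for sanity checks.
print(ph.proxies['https'])
```

Requests made via opener.open(...) would then go through the proxy, while urlreq.urlopen(...) elsewhere stays direct.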
(2) If the library in use does not expose an interface for setting a proxy, but is built on top of libraries such as urllib or requests, you can try setting the HTTP_PROXY and HTTPS_PROXY environment variables. Common network libraries automatically recognize these variables and route their requests through the configured proxy. The settings are as follows:
import os

os.environ['http_proxy'] = 'http://127.0.0.1:1080'
os.environ['https_proxy'] = 'https://127.0.0.1:1080'
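You can verify that the standard library actually picks these variables up with urllib.request.getproxies(), which returns the scheme-to-proxy mapping that urllib (and libraries built on it) will use. A quick check, using the same placeholder address:

```python
import os
import urllib.request

# Placeholder proxy address; replace with your real proxy.
os.environ['http_proxy'] = 'http://127.0.0.1:1080'
os.environ['https_proxy'] = 'https://127.0.0.1:1080'

# getproxies() reads the environment (among other platform sources) and
# returns the scheme -> proxy URL mapping that urllib-based code will use.
proxies = urllib.request.getproxies()
print(proxies.get('http'))
print(proxies.get('https'))
```

Environment variables take precedence over platform-level proxy settings, so this is a reliable way to confirm the variables were set before the library in question was invoked.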
(3) If neither of the above methods works, you can use a tool that monitors, intercepts, and modifies network packets, such as Fiddler or mitmproxy, to intercept the HTTP requests and rewrite their destination address, achieving the same effect as using a proxy.
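With mitmproxy, for example, this kind of rewriting is done in a small addon script whose request() hook edits each intercepted flow. The sketch below exercises such a hook with a stand-in flow object, since the real mitmproxy.http.HTTPFlow only exists inside a running mitmproxy; the host names are hypothetical placeholders:

```python
from types import SimpleNamespace

# Addon-style hook: rewrite the destination of every intercepted request.
# In a real mitmproxy addon, this function receives an mitmproxy.http.HTTPFlow.
def request(flow):
    if flow.request.host == "blocked.example.com":
        flow.request.host = "mirror.example.com"  # hypothetical mirror host
        flow.request.port = 8080

# Stand-in for an intercepted flow, just to exercise the hook locally.
flow = SimpleNamespace(request=SimpleNamespace(host="blocked.example.com", port=443))
request(flow)
print(flow.request.host, flow.request.port)
```

Saved as rewrite.py, such a hook would be loaded with mitmproxy -s rewrite.py; Fiddler offers comparable rewriting through its scripting rules.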
The above is the detailed content of using HTTP and HTTPS proxies in Python, explained with examples. For more information, please follow other related articles on the PHP Chinese website!