It is a very common requirement to write a crawler with Selenium to read web page content. However, you may run into pages that require you to log in to an account before the content can be crawled; the restricted-rating books on Books.com.tw are one example. You will see a login window like the following:
Because cookies record the login state after you sign in to an account, we can read such a page by logging in manually first and exporting the cookies to a file. Later, when crawling the page with Selenium, we re-add the exported cookies and reload the page, and the content can then be read normally.
First, use Selenium to open the web page you want to read. Here we use Books.com.tw as the example:
>>> from selenium import webdriver
>>> driver = webdriver.Edge()
>>> driver.get('https://www.books.com.tw')
At this point, log in as a member through the normal procedure, and then install the Cookie-Editor extension:
Remember to switch back to the Books.com.tw homepage, then use the extension to export all cookies in JSON format:
This copies the cookie content to the clipboard; paste it into a text editor and save it as a file (cookies.json in the examples below).
Then close Selenium and reopen it:
>>> from selenium import webdriver
>>> driver = webdriver.Edge()
>>> driver.get('https://www.books.com.tw')
To add cookies, the browser must be on a page in the same domain as the cookies, so remember to open Books.com.tw first. Then you can open the file that stores the cookies and load it as a list of Python dictionaries:
>>> import json
>>> with open('cookies.json') as f:
...     cookies = json.load(f)
Add the cookies back one by one:
>>> for cookie in cookies:
...     driver.add_cookie(cookie)
At this point you should see the following error:
Traceback (most recent call last):
  File "<stdin>", line 3, in <module>
  File "C:\Users\meebo\code\python\poetry_env\py310\.venv\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 670, in add_cookie
    assert cookie_dict["sameSite"] in ["Strict", "Lax", "None"]
AssertionError
This is because in the data exported by Cookie-Editor, the sameSite attribute uses null or "no_restriction" to mean the cookie is not limited to same-site requests, but Selenium only accepts the three values "Strict", "Lax" and "None", so such cookies are rejected. We must edit the JSON file and change all of these sameSite values to "None" (note that it is the string "None", not null). While you are at it, also delete any cookie whose domain is not ".books.com.tw":
[ { "domain": ".books.com.tw", "expirationDate": 1767941747.633402, "hostOnly": false, "httpOnly": false, "name": "_ga_TR763QQ559", "path": "/", "sameSite": null, "secure": false, "session": false, "storeId": null, "value": "GS1.1.1733381542.1.1.1733381747.0.0.0" }, ... { "domain": ".books.com.tw", "expirationDate": 1748933733, "hostOnly": false, "httpOnly": false, "name": "__eoi", "path": "/", "sameSite": "no_restriction", "secure": true, "session": false, "storeId": null, "value": "ID=7f42c4647467b5fb:T=1733381733:RT=1733381733:S=AA-AfjbpJCe1kw2klEX0xW55n9CY" }, ... ]
After these modifications, reload the file and add the cookies again, and there will be no errors.
After the cookies are added, the page you see is still the logged-out page:
The page must be refreshed for the cookie to take effect:
>>> driver.refresh()
This time, what you see is the page as a logged-in member:
In this way, you can use Selenium to read pages that require member login.
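Putting it all together, here is a minimal end-to-end sketch of the whole procedure, assuming the cleaned-up cookies are saved in cookies.json:

import json
from selenium import webdriver

# open a page in the same domain first; otherwise add_cookie() is rejected
driver = webdriver.Edge()
driver.get('https://www.books.com.tw')

# load the cookies exported with Cookie-Editor (after the sameSite fix)
with open('cookies.json', encoding='utf-8') as f:
    cookies = json.load(f)

for cookie in cookies:
    driver.add_cookie(cookie)

# refresh so the cookies take effect and the logged-in page appears
driver.refresh()

webdriver.Edge() is used here only because the examples above use Edge; any Selenium-supported browser works the same way.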
One last reminder: cookies have a limited period of validity. If, after some time, the previously saved cookies no longer log you in, simply follow the steps above to export the cookies again.
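Since the export includes an expirationDate field (a Unix timestamp, as shown in the JSON above), you can also check for stale cookies before reusing them. A rough sketch:

>>> import json, time
>>> with open('cookies.json', encoding='utf-8') as f:
...     cookies = json.load(f)
...
>>> expired = [c['name'] for c in cookies
...            if not c.get('session') and c.get('expirationDate', 0) < time.time()]
>>> if expired:
...     print('These cookies have expired:', expired)
...

If any names are printed, repeat the export procedure above to get fresh cookies.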