
Detailed explanation of how to use python crawler tool Selenium

高洛峰 · Original · 2017-03-08 11:25:09

Introduction:

The ordinary urllib2 library falls short when crawling dynamic pages with Python. For example, on the JD.com homepage, new content keeps loading as the scroll bar is pulled down, and urllib2 cannot capture it. This is where today's protagonist, Selenium, comes in.

(Screenshot: the JD.com homepage loading new content as the page scrolls)

Selenium is a tool for testing web applications. Selenium tests run directly in the browser, exactly as a real user would; supported browsers include IE, Mozilla Firefox, Mozilla Suite, and others. This also makes it very convenient for crawling pages: you simply simulate a human's steps through the site, with no need to worry about cookie or session handling at all. It can even type in your account and password and click the login button for you, and for the scroll bar above, you just scroll the browser to the bottom and save the page. These capabilities are very useful against common anti-crawler mechanisms. With that, let's get to the main text and walk through crawling a dynamic web page that requires login.

Case implementation:

To use Selenium, you need to choose a browser to drive and download the corresponding driver. On the desktop you can choose Chrome, Firefox, and so on; on a server you can use the headless PhantomJS. Since a desktop browser pops up a real window where you can watch the page change, a common workflow is to debug with Chrome or similar on the desktop, switch the browser to PhantomJS, and then upload the script to the server to run. Here we use PhantomJS directly for the demonstration.

First, import the modules. The snippets below also use time, traceback, and Selenium's explicit-wait helpers (WebDriverWait, expected_conditions, By), so we pull everything in up front:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time, traceback

Next, initialize a browser. In the parameters you can specify attributes for how pages are loaded:

# Configure PhantomJS capabilities: time out slow resources and skip images
cap = webdriver.DesiredCapabilities.PHANTOMJS
cap["phantomjs.page.settings.resourceTimeout"] = 180
cap["phantomjs.page.settings.loadImages"] = False

# Point Selenium at the PhantomJS executable and apply the capabilities
driver = webdriver.PhantomJS(executable_path="/home/gaorong/phantomjs-2.1.1-linux-x86_64/bin/phantomjs", desired_capabilities=cap)

The above initializes PhantomJS, sets the path to the browser executable, and chooses the loading attributes: a resource-loading timeout, and no image loading (we only care about the page text). You can configure other settings here as well.
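A side note: PhantomJS development was later suspended, and recent Selenium releases no longer support it, so on current setups headless Chrome is the usual substitute. A minimal sketch targeting newer Selenium, assuming chromedriver is installed and on your PATH:

options = webdriver.ChromeOptions()
options.add_argument("--headless")  # run Chrome without a visible window
# Chrome preference to skip image loading, mirroring loadImages=False above
options.add_experimental_option(
    "prefs", {"profile.managed_default_content_settings.images": 2})
driver = webdriver.Chrome(options=options)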

Set a few properties and fetch a web page:

driver.set_page_load_timeout(180)
driver.get('//m.sbmmt.com/')
time.sleep(5)
driver.save_screenshot('./login.png')   # save a screenshot of the page, for easier debugging

Since errors are hard to avoid when running on a server, save_screenshot lets you capture what the page currently looks like, which makes debugging much easier.

The next step is to enter the account and password and log in, so that the site's cookies are available for subsequent requests.

# Enter the username and password
driver.find_element_by_xpath("/html/body/div[1]/div[1]/login/div[2]/div/form/input[1]").send_keys('*****')
time.sleep(1)
print 'input user success!!!'

driver.find_element_by_xpath("/html/body/div[1]/div[1]/login/div[2]/div/form/input[2]").send_keys('****')
time.sleep(1)
print 'input password success!!!'

# Click the login button
driver.find_element_by_xpath("/html/body/div[1]/div[1]/login/div[2]/div/form/button").click()
time.sleep(5)

The code above uses find_element_by_xpath to locate the input boxes, types in the account and password, and clicks the login button. As you can see, this is very convenient: the browser navigates to the next page on its own, and we only need to sleep for a few seconds while it loads.
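One caveat: absolute XPaths like these break as soon as the page layout changes. If the form fields carry stable attributes, selecting by name or CSS is usually more resilient. A sketch under the assumption (hypothetical here) that the fields are named 'username' and 'password':

driver.find_element_by_name('username').send_keys('*****')    # assumed field name
driver.find_element_by_name('password').send_keys('****')     # assumed field name
driver.find_element_by_css_selector("form button").click()    # first button inside the form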

The information we want to crawl lives inside a specific element, so we need to check whether that element has appeared:

try:
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CLASS_NAME, 'pulses'))
    )
    print 'find element!!!'
except:
    print 'not find element!!!'
    print traceback.format_exc()
    driver.quit()

The above waits for the element with class 'pulses' to appear; if it has not shown up after 10 seconds, Selenium raises a TimeoutException, which the bare except above catches.
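If you prefer catching the specific exception instead of using a bare except, the class lives in selenium.common.exceptions:

from selenium.common.exceptions import TimeoutException

try:
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CLASS_NAME, 'pulses')))
except TimeoutException:
    driver.quit()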

With the basic setup done, the dynamic content still needs handling. Like JD.com, this page keeps producing new content as you pull down, so we have to drive the scroll bar ourselves:

print 'begin scroll to get info page...'
t1 = time.time()
n = 60   # controls how many scroll steps the page is divided into
for i in range(1,n+1):
    s = "window.scrollTo(0,document.body.scrollHeight/{0}*{1});".format(n,i)
    # print the scroll command, the current page size, and the elapsed time
    print s, len(driver.page_source), time.time()-t1
    driver.execute_script(s)
    time.sleep(2)

Here driver.page_source returns the HTML of the page. Once scrolling is finished we can call it and write the result to a file, which completes the program's logic.
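For completeness, a minimal sketch of writing the result out and shutting the browser down (the file name is arbitrary; page_source is unicode in Python 2, hence the explicit encode):

with open('page.html', 'w') as f:
    f.write(driver.page_source.encode('utf-8'))
driver.quit()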

Advanced:

Selenium can cope with most common anti-crawler strategies, since to the site it looks like a person browsing, although verification codes still require extra handling. Another point is that access cannot be very fast: after all, a whole browser has to be driven. So use Selenium when it is necessary; when it is not, the lighter requests library is the better tool.
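Since we logged in precisely to obtain the site's cookies, one common pattern is to hand them from Selenium over to requests and do the fast follow-up fetches there. A sketch, assuming the requests library is installed and using a hypothetical follow-up URL:

import requests

session = requests.Session()
# driver.get_cookies() returns a list of cookie dicts
for c in driver.get_cookies():
    session.cookies.set(c['name'], c['value'])
resp = session.get('http://example.com/data')   # hypothetical URL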

A related blog post for further reference: Python Crawler Tool Five: Selenium Usage and Common Functions.


